4_Inferno_V_VI_VII.txt

Prof: Last time I finished-- we finished on a little note, as you'll recall: the detail of the garden where the pilgrim finds himself and meets the other poets. And he declares, in a way that seems to be really prideful, his place in this trajectory, this literary poetic tradition. I was emphasizing last time that this is a detail that opens for us-- opens our eyes to the ambiguity of gardens, an ambiguity that Dante will go on dramatizing throughout Purgatory especially, and in other areas, in an oblique way. It's not necessarily monotonously bucolic language, this idea of the ambiguity of gardens. What are some of these ambiguities in Canto IV? We are drawn naturally to gardens, and we are drawn to gardens because they reflect for us some image of order. Especially if you're traveling through Hell, then you do want this sort of-- you explore, you enter willfully this place that bears the fingerprints of the human hand, something which has been elaborated by human beings. This is a divine place, nonetheless gardens mean that for us. But at the same time they give us a sense of security and, in their enclosure, also a sense of lordship over them. It's something we can control, something that we see and where we feel we belong. This is exactly the temptation that the pilgrim experiences in Canto IV. He relaxes, and this happens to all the heroes in the epic tradition: when they enter gardens they even set aside their arms. They get disarmed in more ways than one. That is to say, they come to understand that this is a place of shelter, a place which is so peaceful and idyllic that one need not fear that one is in danger. In effect, that's where the danger is most powerful. Dante experiences a danger, and the danger he experiences is that of poetic hubris.
He is descending into humility, that's the trajectory of his journey, and there he rests with Homer, Virgil, Lucan, etc., and he just says he feels that he belongs-- that his high genius allows him to be right there with them. I remind you of this little detail exactly because it allows me to say more precisely what the problems are with the presentation of gardens, but especially to emphasize that Canto V, and the drama that is unfolded in Canto V, is a drama ostensibly of desire. It's the story of the great passion that a woman, one of the most famous women in literature, Francesca, has with her brother-in-law Paolo. But the point is that that drama stems directly from the crisis in the pilgrim's mind in Canto IV of Inferno. In what way? It is as if the experience of hubris, of celebrating one's own power and prowess as a poet, now has to confront the consequences of that claim. Now Dante comes literally face to face with a reader of his poetry, and a reader of his poetry who understands his poetry in a way that was not necessarily the one intended by its author. You have now in Canto V the confrontation of reader and poet, and we shall see that Francesca, as you remember from your reading, having read Canto V, is a great reader of texts. She goes on quoting Lancelot, not in the version of Chrétien de Troyes, but a parallel version; it's the same romance. She goes on quoting from The Art of Courtly Love, this text about the art of love by Andreas Capellanus, which you may remember I alluded to in at least one of the earlier talks, and she goes on actually quoting to Dante Dante's own poem in the Vita nuova, which we shall go and look at in a while. Let's start with Canto V. Where are we in the poem? Where are we located? We are in the second circle. This--your notes will tell you.
We're in the larger area of so-called incontinence, and I really should emphasize to you something about-- we shall look at it in more detail further on, but something about the topography, the moral topography of Hell. What is the disposition? What is the distribution of sins and sinfulness? What is actually sin? What are we to understand by sin? For the time being I'll tell you that for Dante, it's the will which is the locus of sin. You cannot really sin intellectually; you cannot commit sins with your mind. Your mind can partake and become an accomplice of the will, but it's primarily in the will, in the voluntary action, that you find sinfulness. That's the first thing. Where are we now? In the area of incontinence. What does that mean? Well, to make it very simple, you probably should know that the shape, the diagram of the soul for Dante is very classical, very ancient, really Aristotelian: it's more or less figured as a triangle where-- on the left side you have, because it's always the left, the area of the will, and then on the right side you have the area of reason. There are two faculties of the soul, like two feet of the body, and the point where they meet is what in the Middle Ages, using a classical term, they call synteresis. This is the area of free will--in other words, in free will you have a conjunction of both will and reason, and that's the beginning of the moral life. It's not the end of it at all; it's really only when you are really free, when your will is free, that you can start making decisions and getting engaged in the world around you. Now the soul is divided into three parts. It's a tripartite structure, and it begins at the bottom--I should put it on this side because it's the will.
The concupiscent appetites, which is really what Francesca experiences: incontinence, lust in this form; later it will be gluttony, etc., avarice, prodigality. In the middle area here, you would have the sensitive appetite, which is really the middle ground of Dante's Hell: violence, the kind of bestiality that takes over the human mind. And then the third is the rational. The order, the geometry of Hell, in a way, is patterned on the order of the soul, the idea of the soul--in an inverted form, of course. We begin in the area of concupiscence, the area of lust. Someone was asking me what lust was last time; I think that we're going to have some kind of understanding about this. This is where we are, in the area of incontinence, and the first sin is lust, or what Dante will call with a formula: it's the area of the sinners who have inverted the hierarchical order of reason and the will. They have invested pleasure with supreme lordship over the order of rationality. So reason, though somehow dimmed, is always going to be used as a rationale--as a kind of way of creating alibis for the passion of Francesca. This is the way the canto begins. The second thing that I have to mention as we read here is the particular landscape that Dante evokes. It's a landscape of souls that go around, swirling around in a kind of circular structure. Let me tell you a little detail here: you have to be careful as you read the poem even about the directions of the pilgrim. For instance, if I were to ask you, which way is Dante descending into this spiraled Hell? When you move into a spiral, it's very difficult to see if you're really going left or right of course, but he goes out of his way to tell us that he's always going leftward. Because he's descending--and as soon as we get to Purgatory, he goes out of his way to tell us that he's now going rightward, which is to say, that Hell is the inverted cosmos of Purgatorio.
So it's really--he's always going the same way, only that as he goes into Hell he's going down and he's inverted. When he has to go from Hell to Purgatorio, the operation is going to be that of turning upside down in order to go finally in the straight way, the right way. The other detail is the symbolism of the circle, which as you know is very ancient, very old. There are a number of ways of understanding direction in the Middle Ages. For instance, the linear direction implies that human beings are caught in time and are going toward some kind of purpose or precise destination. The angels are those who circle around the throne of God, so that the circle implies the plenitude and perfection of movement. Clearly Francesca, who is caught in a world of love, in the passion of love, is giving a kind of parodic version, a caricature, of the circular perfect movement of the mind and of the angels around the divinity. The spiral, which is the movement of the pilgrim, combines line and circle; it implies that Dante's mind is going in a circular way around the divinity, but he also has a purpose, an aim to reach. Here the two--Francesca and Paolo--are going around in circles, circles in which they will experience no rest. I think that the principle behind this representation of desire is displacement; desire is always a form of displacement. That's something Dante valorizes greatly; that's the ambiguity of Dante's thinking. Desire is displacement because in this case Paolo and Francesca get nowhere, and yet it's exactly this displacement that makes us aware that we are never where we should be, that our hearts are always out of place. It's what Augustine says in the Confessions: he begins the Confessions with the awareness that his heart, he says, is unquiet. That idea of the unquietness of the heart out of place is what Dante is enacting here.
Dante is moving within the larger pattern of Augustine's thinking about desire, and there will be a lot of talk about that. Do you know what the word "desire" means, by the way--a word which is the same in English as it is in Italian or Latin? It's linked to the stars: de sidera. To have desire is to know that you are not quite at the sidera, that we are somehow removed from the world of stars. It's linked to "consideration," another word that implies that the mind moves alongside--when you consider ("When I consider how my light is spent"), the mind manages to move with the circularity and perfection of the sun. All of this is not irrelevant to the point that's at hand here. So--Dante begins this canto with a number of metaphors of birds. You realize that, first of all, he starts around line 30 with the "hellish storm." It's the externalizing of the storm inside, the inner storm: "never resting, seizes and drives the spirits before it; smiting and whirling them about," etc. It continues, "As in the cold season their wings bear the starlings along in a broad, dense flock, so does that blast the wicked spirits. Hither, thither, downward, upward, it drives them; no hope ever comforts them, not to say of rest but of less pain." And then the cranes. And Dante asks Virgil, "Master, who are these people whom the black air so scourges?" And now we have an enumeration, another application of an epic device, enumerating. The epic is always driven by the desire for totality, to include all things within the compass of its representation. It always has the enumerative style, and now here we have a number of figures that Virgil points out. And they're all queens at the beginning. They are founders of cities.
Keep this in mind because I think that part of the issues that Dante is raising--and you can think about it, we can talk about it if you wish--is the relationship between eros and politics. Pleasure and the city. What is the place of pleasure in the economy of the city? Let's see who they are. One is the "Empress of peoples of many tongues, who was so corrupted by licentious vice that she made lust lawful in her law to take away the scandal into which she was brought." And the emphasis of the line is this lust becoming lawful, lust becoming public and accepted. And "she is Semiramis," of Assyria, "of whom we read that she succeeded Ninus." Then the next one is Dido, who is Virgil's invention in many ways, in the Aeneid; this is a reflection on the Aeneid as a poem of love too. Dante cannot but think about how Rome, Rome's conquest, would appear to be a libido of power, libido dominandi, and yet--he's really playing with the idea of what we call the--I'm going to have to use this term because I can't think of an English term--boustrophedon. You know what it means, the boustrophedon, right? It's very easy; it's a Greek term implying a reversal. Roma, as in a mirror, becomes Amor, and Venus is the mother of Aeneas. So there is this idea again of a link, an inner link, between love and politics and the city. And Virgil writes the Aeneid, literally, as a love poem. That is to say, the ideology of Rome is an ideology based on desire. This is the idea which Augustine will counter by saying, yeah, this is not really love, this is lust for power--and there is the distinction that someone was raising here, the gentleman was raising last time, about how lust is related to love. You already start seeing the antagonism between the two of them.
Augustine, a Roman--an African, but a Roman thinker--was really writing about, and belongs to, and reflects on the great myths, the mythology of Rome. And for him this is true in the Confessions, but it's especially true in The City of God, where he juxtaposes the earthly city, Rome, to the heavenly city, the heavenly Jerusalem. The two cities are opposed to each other. There he reflects on Rome as a city based on lust for power, and from that point of view really not different from any other empire. All of them--Rome, say the Persian Empire, the Greek claims for empire, and what not--are part of a long sequel of violence and imperial fantasies. Dante is thinking along these lines, and we shall see where that will take him in a moment. Then there is Cleopatra of Egypt, Helen and the story of the fall of Troy. Then finally the story of Tristan, which as you know is really a medieval invention: Tristan and Isolde. We are going to see Lancelot and Guinevere in a moment. The presence of Tristan shows one thing: that all the heroes and heroines of antiquity are viewed through the lenses of medieval romances. They may belong to the grand epics of the classical world, but Dante will see them through the optic of romances, the literature of desire. "And he showed me more than a thousand shades, naming them as he pointed, whom love parted from our life." This is the epic catalog. "When I heard my Teacher name the knights and ladies of old times, pity came upon me, and I was as one bewildered." Now this is really the first time that Dante introduces the notion of pity in the poem. And we shall see by the end of the canto that he is going to be overwhelmed by pity and he is going to faint after he hears the story of Francesca. He was so overwhelmed that he fell, he says, "like a dead body" falls. It's a fainting.
It's sympathy; maybe it's a little bit of a self-recognition; maybe--we shall see--it's a way of coming to grips with his own responsibilities, or some of his responsibilities. The point I want to make with this pity is that you do know, but Dante does not know, the Poetics of Aristotle; yet he knows whatever is available through Horace, and he knows quite a lot. The point here is that Dante goes on reflecting--it could become a paper topic for some of you enterprising spirits--on the relationship between pity and justice. Throughout the poem he goes on thinking about these two terms. Does justice necessarily need pity, or is there some kind of justice that must learn how to be pitiless, that has no place for this kind of compassion? Are the two necessarily antagonistic, or is there some way of thinking of a meeting point between them? This is the first time he introduces this idea of pity, a kind of recognition; a sense that it could be he who is in that position. And he begins, "Poet, I would fain speak with these two that go together and seem so light upon the wind." He doesn't talk to any of the major classical figures. He chooses two people from his own time, two people from the ordinary life around him--by this time Dante may very well be living in that area of Italy which is Ravenna, not quite Ravenna, but in that area around Ravenna. "'Thou shalt see when they are nearer us, and do thou entreat them then by the love that leads them, and they will come.' As soon as the wind bent...
'Oh wearied souls, come and speak to us-- and speak with us, if One forbids it not.'" You realize that the name of God is never mentioned here in Hell, except as a discourse that takes place here on Earth; the souls in Hell will always use periphrastic constructions, terms or phrases, as if it would be highly improper for Dante to allow them, or even for them, to acknowledge that which they never really acknowledged: "if One forbids it not." Then, "as doves summoned by desire, come with wings poised and motionless to the sweet nest, borne by their will through the air, so these left the troop where Dido is." Again, the presence of Dido--the Virgilian myth of Dido and also the other possibility of Rome: Virgil is writing about the great battle between Carthage and Rome as two ways of choosing a civilization, two ways of deciding how one should organize, how one should experiment with cities. "Coming to us through the malignant air; such force had my loving call." Now listen to how Francesca speaks: "Oh living creature, gracious and friendly, who goes through the murky air, visiting us who stained the world with blood." She was killed, by the way, by her husband, who caught her and his brother Paolo in a tryst; that's what the allusion to the blood is. "If the King of the universe were our friend, we would pray to Him for thy peace, since thou hast pity of our evil plight. Of that which it pleases thee to hear and speak, we will hear and speak with you while the wind is quiet, as here it is." Now she begins the description of her life, where she was born. Most of the narratives in Inferno begin with this idea of birth. You saw that in the case of Virgil and you see it once more here in the case of Francesca.
They begin with birth for a number of reasons, but above all because birth is for Dante the event that somehow could potentially have changed and imparted a different direction to the world--or could end in nothing, as in the case of Francesca. Hence a great piece of literature, but she herself did not really achieve much. Now she talks about her city, in terms that clearly contrast with this movement of the souls caught in the storm. There they go endlessly in the air, and now she evokes the place of--what she really wants is rest: "The city where I was born lies." That's the image of the stability of a city she has lost. "Where the Po, with the streams that join it, descends to rest." Now come three tercets, in Italian all beginning with the word love. Love is made into a kind of transcendent divinity. It is the great subject of her experience; look at this. "Love, which is quickly kindled in the gentle heart, seized this man for the fair form that was taken from me, and the manner afflicts me still. Love, which absolves no one beloved from loving, seized me so strongly with his charm that, as thou seest, it does not leave me yet. Love brought us to one death." What is she saying? Well, a number of things, and I really have to give this to you. First of all, she's really quoting important literature. The first line, "Love which is quickly kindled in the gentle heart," is really a quotation from one of Dante's sonnets in the Vita nuova that you read, Chapter XX. "Love and the gracious heart are a single thing," that's how Dante starts, and there Dante invokes the poetics of the sweet new style. He tells us in his poem that one can no more be without the other than can the reasoning mind without its reason, etc. It's clearly meant, on Francesca's part, to flatter the sensibility of the poet himself as author.
It's part of a seductive strategy that she can use. The second image--love that does not absolve anyone who is beloved from returning, reciprocating the love--really comes from the so-called rules of love that Marie de Champagne dictates, in The Art of Courtly Love, and I want to read this. The translation is not all that accurate, but--I'm sorry, I got the wrong one, the wrong book. The Art of Courtly Love, Book III: it ends with the famous rules of love. I'll explain what they are; I can really read some of them to you so you have an understanding of what courtly love is, the mind of it--which applies very well to Francesca. Francesca imagines herself as really a courtly love heroine. She lives in the world of kings: God is the king of the universe, and she's in the court of the king of love, maybe. These are some of the concerns of the rules of love in The Art of Courtly Love. Rule 1: "Marriage is no real excuse for not loving." It's a way of saying adultery is the law of courtly love. Number 2: "He who is not jealous cannot love." Number 3: "No one can be bound by a double love." Then: "Boys do not love until they arrive at the age of maturity"--which leaves very unclear what the age of maturity can be. Number 7: "When one lover dies, a widowhood of two years is required of the survivor." Number 8: "No one should be deprived of love save for the very best of reasons." Number 9: "No one can love unless one is impelled by somebody else's love," which is exactly the line that Francesca echoes. Why these rules of love? What are they? What is she saying? What Andreas is doing, first of all, by having these rules of love and reducing love to an art, is acknowledging that love is the most transgressive, disruptive of all experiences, and therefore it needs to be formalized. It needs to be contained.
It may be part of a game, as is perhaps the thrust of Andreas Capellanus' thinking, or made to be part of an acceptable ceremony, which is a possible reading of what is happening. Francesca falls completely, squarely, within this tradition of believing that she lives in a world of love where no resistance is possible. In effect, these tercets which I read to you above--love, love, and love--are really meant to cast love as a transcendent force that no one, that she at least, cannot withstand. What she is doing is abdicating the power of her will to the irresistible, omnipotent presence of this love. It's part of a strategy of not acknowledging any responsibility, a strategy to find for herself an alibi: I was made to do that. This literature, and the literature of Andreas Capellanus, were philtres of love. You understand what I mean--how in romances you always have philtres of love. No one is going to take the responsibility of saying, well, I had too much to drink, or I read the great poem, or whatever, and so I was doing that. It's a way for Dante to show the blindness of Francesca to the reality of her situation, of where she is: a kind of unwillingness to give up that which is really the quality of sin and the trait of sin: habit. It is sin in the measure in which it has become a habit, a way of clinging to it and not acknowledging that there may be some kind of alternative, something different to it. So Dante goes on now, entertaining her arguments. "When I answered I began: 'Alas, how many sweet thoughts, how great desire, brought them to the woeful pass!'... Then 'Francesca, thy torments make me weep for grief and pity, but tell me, in the time of your sweet sighing, how and by what occasion did love grant you to know your uncertain desires?'
And she answered: 'There is no greater pain than to recall the happy time in misery, and this thy teacher knows; but if thou hast so great desire to know our love's first root'--and notice even that metaphor of the root of love, the origin of love; she calls it the root of love as if her passion were the flower of love--'I shall tell as one who weeps in telling. We read one day for pastime of Lancelot, how love constrained him. We were alone and had no misgiving. Many times that reading drew our eyes together and changed the color in our faces, but one point alone it was that mastered us; when we read that the longed-for smile was kissed by so great a lover, he who never shall be parted from me, all trembling, kissed my mouth. A Galeotto was the book and he that wrote it. That day we read in it no farther.' While the one spirit said this, the other," Paolo--whose name means "little" in Latin, as you know, paulus, small--"wept so that for pity I swooned as if in death and dropped like a dead body." And that's the end of the canto. Well, it's an amazing story, and we will talk about a number of things. The first thing is that this is a scene represented through reading, a story of reading. You are aware of that, right? She reads, they read--she says that one day they were reading for delight, and that's probably part of the concerns that Dante has. How should we read, if we read for delight? They read for delight. Is there some other way of reading? Delight is clearly a constitutive element of reading literary texts, but is there something else that we could do along the way? What is our problem really? Let's continue with this idea of reading. She's reading the story of Lancelot, Lancelot and Guinevere--you do know the story; it's not the version of Chrétien de Troyes, but if you want to write about Chrétien de Troyes' Lancelot and Canto V, you easily can.
Dante does refer to the stories of Chrétien de Troyes often in his theoretical works. The story of Lancelot is the story of adultery at court. Lancelot is the secret lover of the queen, clearly out of the desire--and that says something about the nature of desire--to really supplant the king, Arthur. There's a triangle at stake here, a triangle of desire, and Francesca imitates this triangle; we'll talk about it in a moment. Let me go a little bit into the story of Lancelot. Like all the stories of Chrétien de Troyes, it begins on one of the great feasts of Christianity--usually the Ascension, Easter, Pentecost, one of the great feasts. And the heroes are sitting around boasting about themselves. Not one of them is doing anything heroic, but they all talk about how great they were. It's a little bit like the parodic version that you have in the argument between Ulysses and Ajax in the Metamorphoses, where they debate who is the hero worthy of inheriting the arms of the great Achilles. And they talk not about their present prowess but about what they were. In the story of Chrétien, clearly the idea is that the heroic age is over and done with. And the whole romance goes on exploring and pondering the reasons why the heroic age may have come to an end. What it is, is the secret love affair between Lancelot and Guinevere. The story goes on: while they're sitting around drinking ale and talking, a mysterious figure comes from the outside and kidnaps the queen. The knights who were sitting around don't move, and everybody's expecting Lancelot to get up and go rescue the queen, but he won't, out of fear that if he were too impetuous, the secret affair that he has with the queen would probably be discovered. That hesitation, that moral hesitation of Lancelot, is really the emblem of the falling away from aristocratic virtues.
There is now the intrusion of a temporal wedge between the thought and the action, and then of course Lancelot has to ride on that famous cart of shame, exposed to the ridicule of the whole town, before he can really go on trying to rescue the queen. If you think about it, Chrétien is already reflecting on the crisis of the city in terms of private passion. Something is really gnawing at the heart of the city, and it's really the question of desire: the inability to distinguish between the public and the private, the inability to separate the two somehow, or to find some way of threading the line between those two concerns. In Canto V this is really what Francesca does. Dante is exploring reading: she is reading the text of Lancelot and lapses into an imitative strategy of reading. She wants to be like the heroine that she reads about. She refuses to take an interpretative distance from that specular image: she wants to feel like a queen. And she thinks Paolo can be like Lancelot, and this is exactly what we call the mimetic quality of desire. It's not my term; it's the term of René Girard, who has written about this question of the imitative structure of desire. Between us and the object of desire there is always the presence of a mediator, and in this case the mediator is Lancelot for Paolo, and Guinevere for Francesca. But there's more to this story. For instance, you cannot read the story without thinking about how Dante frames the experience of Francesca with the language of time. Do you see how many references there are to time? There is no greater grief than remembering happiness, a past happiness, "and this, your doctor," meaning Virgil, "knows very well." Then she starts talking about her adventure: "we were reading one day," remember, "that day we read no further." It's all about time, about the question of time. So what is the problem with this idea of time? Why is Francesca understood in these terms?
Why is her story represented in terms of time? In effect, I think there's one great passion that Francesca has, and her passion is to do away with time. She's expressing the desire that her happiness, which lasted very briefly, a brief instant, may really last an eternity; or maybe, just maybe, she may be expressing the wish--or the idea, the insight more than the wish--that one moment of happiness is well worth an eternity of pain. Or maybe she's just saying that it's not too bad that the love story she had only lasted the briefest possible time. At any rate, all this shows primarily that Francesca not only abdicated choice, not only thought that her own will was powerless vis-à-vis the irresistible force of this transcendent idea of love, but above all, she has betrayed the order of necessity and time. Her passion violates the order of time. Above all, from this point of view, Dante goes on reflecting about his responsibilities as an author when he's confronted with a reader. What have I done? What have I written? What I write has been understood in a way that is not necessarily the one he meant, the meaning that he meant to assign to the Vita nuova. These are some of the concerns, and we can find some others. Let me just pass on to Canto VI, which is really not completely unlike what we have been describing here. Now Dante goes--that's another part of his strategy. Whatever Dante has found out about passion, about desire, about this world of appetites, and whatever he has decided about himself and the meaning that this may have for him as a poet--that scene of fainting at the end--will become the premise for other concerns raised in Canto VI, which as you know is a political canto. This is the strategy of Dante.
Let me see: I found out certain things about me, my responsibility; I found out some things about the disruptive quality of desire vis-à-vis the political order; now let me find out how authentic this finding may be. Let me move into a public realm. So we go from the world of the court, the private world of Francesca, now to literally the world of the city, the world of Florence, where we are still talking about incontinence in a different form: the question of gluttony and politics. So he takes elements that he has already anticipated here in Canto V, the political, and goes on thinking about politics in Canto VI. Here we go then with Canto VI, the third circle, the gluttonous. "With the return of my mind, that was shut off when the piteous state of the two kinsfolk quite confounded me with grief, new torments and new souls in torment I see about me, wherever I move and turn and set my gaze." I find, first of all, the presence of the word mind--in Italian it is mente--in line 1 very suggestive. We are dealing here now with bodies. Canto VI is all about bodies: it's all about gluttonous souls, bodies who took care of the bodies. Dante uses as a counterpoint the question of mind, as if the sin of these bodies, the sin of these gluttons, has also been the sin of not thinking in terms of mind. The mind is a necessary counter, a necessary complement, to the presence of bodies. The word mind--in English we have "mental," in Italian it's mente, Latin mens--really comes, as you know, from the Latin for measure. The mind is that which measures things, the mind is that which gives a sense of the measure even of our own desires. The metaphor of mind appears throughout Canto VI. We are asked to think of that which is missing in this biological reflection, a reflection about what I call the biology of politics.
Politics now reduced to the question of appetites of bodies. It's not--normally we have the pride of minds when we think about all the people who have whatever fantasies, whatever megalomanias, whatever desires, but mental above all when we talk about politics, but here it's really a question of politics in terms of the inexhaustible appetites of bodies. We are going to talk about politics and gluttony, politics and bodies. Dante here meets the figure that is presiding; the mythological figure that is presiding over this canto, this area of gluttony is the classical figure of the three-headed Cerberus, a way of hinting about the voraciousness, the many mouths of this monstrous animal. "Cerberus, a beast fierce and hideous" and so on. And we do know that the landscape is stinking under an endless rain; there are hints that this is really one of--some kind of repulsive form of waste and food. "The rain makes them howl like dogs, and the profane wretches often turn themselves, of one side making a shelter for the other. When Cerberus, the great worm perceived us, he opened his mouths and showed us the fangs, not one of his limbs keeping still and my Leader." And so on. "As the dog that yelps for greed and becomes quiet when it bites its food, being all absorbed and struggling to devour it, such became these foul visages of the demon Cerberus... We passed over the shades that were beaten down by the heavy rain, setting our feet on their emptiness which seemed real bodies." This is actually the great--the description and figuration of gluttony. Bodies that are always empty and they are empty now. They are punished to be empty, as empty forms; and they seem--they are not bodies, they seem real bodies. "They were all lying on the ground except one who sat up as soon as he saw us passing before him. 
'O thou who art led through this hell,' he said to me, 'recall me if thou canst; thou wast begun before I was ended.'" Another little reference to birth and death--it's part of a cycle. There's no necessary connection between the three heads. The death of Ciacco--the name means a pig; that's the way he was surnamed in the streets of Florence--and the birth of the pilgrim. "I said to him, 'The anguish thou art in has perhaps taken thee from my memory,'" and the word is mente, the mind, "'so that I do not seem ever to have seen thee. Tell me who you are, put in a place of such misery and under such a penalty that, if any is greater, none is so loathsome.' And he said to me, 'Thy city,'"--we are talking about Florence; this is the politics of the city. "Thy city": it doesn't say "our city," but "thy city." Ciacco already views himself as outside of it, not really occupying a place within the city. "'--which is so full of envy that already the sack runs over, held me within it in the bright life, when you citizens,'"--once again the distance of Ciacco from the city of Florence--"'called me Ciacco. For the damning fault of gluttony, as thou seest, I lie helpless in the rain; and in my misery I am not alone, for all these are under the same penalty for the same fault.' And he said no more." Okay, here I have to stop a little bit to tell you something that you already caught, of course: what the basic metaphor, the basic conceit, is in this canto, and it's the conceit of the city and the body. In the classical world you're used to the conceit of the correlation between the soul and the city, but for Dante this is a soulless city. The only way to talk about it is through this image, which is very ancient, very Roman actually: the idea of the city as a body, as a corporate structure.
Some of you readers of Shakespeare may remember your Coriolanus, where Menenius makes the same speech about the city and the body. But it really goes back to a historian of the classical world that Dante absolutely loves--and he's not the only one; Augustine uses him all the way too. The name is Livy, a Roman historian who wrote the famous book about the history of Rome from its foundation. One of the stories he tells is that of the famous civil war in Rome: the civil war between the patricians and the plebeians. The plebeians, the workers, were so tired of what was happening in the city--they were doing all the work, that's the way they complained, but they had few of the pleasures coming from living in the city--that they decided to secede. It is the famous secession whereby they go--it's a kind of schism--they retreat to the Aventine Hill, one of the seven hills of Rome. The city is paralyzed, as you can imagine; it's a strike. The patricians send an emissary, a man by the name of Menenius, to convince the plebeians to return to the city. Menenius manages to do this by telling the plebeians a famous fable which is still known as the fable of Menenius. What does he tell them? He says: look, the city is really like a body. When you have a body, the hands work. Yes, it seems that the mouth enjoys and savors the great pleasures of foods and so on; it seems that the stomach can be full; but actually whatever they produce and take in and ingest, they redistribute to the rest of the body, to the hands, the feet, etc. The city is like a body. That's the analogy between the corporate structure of the city, the idea that the city is a corporation--and by the way, we carry a reminder of this, of how vital this is, on a dime. I don't have a dime with me, but if you have a dime you can read e pluribus unum.
And it says: one body out of many limbs, out of many members--still an image that we carry. It's still a conceit that we have, right? The idea is that the city is like a body, and the plebeians are convinced and they go back and recompose the order of the city. This is the fundamental structure here. I said something else, which really is the question: does Dante believe in the corporate structure of the city? Can it really hold together? And I go on submitting to you that he no longer believes in this. When you read the canto you will see that all the body parts are literally littering the city; they're all mentioned--the nails, the hands, the heart, the beard, the hair, the mouth, etc.--sort of spread all over, as if to imply the impossibility of constituting these body parts into an organic, unified totality. There's another little issue here that is being raised and that I want to talk about before the end of the hour: the question of civil war and what Dante understands by civil war, because the reality of Dante's political thinking is always the civil war. Let me just give you some textual evidence and then we'll go on. "I answered him, 'Ciacco, thy distress so weighs on me that it bids me weep. But tell me, if thou canst, what the citizens of the divided city"--this is now Florence--"shall come to, and whether any there is just, and tell me the cause of such discord assailing it." An amazing image, discord, because it's a musical metaphor--accord, discord--but it really comes from cor, the "heart"; that's where the word comes from. Discord makes the heart the place, the receptacle, where all the envy, all these jealousies that destroy the city, are located. "He said to me, 'After a long strife this shall come to blood and the party of the rustics shall drive out the other with much offense; then by force of one who is now maneuvering,'"--meaning the Pope--"'that party is destined to fall.'"
This is the Guelfs and the Ghibellines, the parties within which the city is divided. "To fall within three years, and the other to prevail, long holding its head high and keeping the first under grievous burdens, for all their tears and shame. Two men are just and are not heeded there. Pride, envy and avarice"--these are the causes, these are "the sparks," he calls them. "'These are the causes that have set these hearts on fire.' Here he made an end of his grievous words." And then Dante goes on literally evoking a street scene in Florence, asking about some other characters in the city. "I would still learn from thee, and I beg thee to grant me further speech. Farinata"--about whom we shall see next Thursday in Canto X of Inferno--"Tegghiaio, men of such worth, Iacopo Rusticucci, Arrigo, and Mosca and the rest whose minds were set on well-doing, tell me where they are and give me knowledge of them; for I am pressed with a great desire to know whether they share in Heaven's sweetness, or the bitterness of Hell." I would like to point out to you the presence of the language of gluttony throughout: sweetness, bitterness, pleasantness, unpleasantness. This really runs through the canto and links together gluttony and politics. It's the body--you can see the body metaphor--but also these other experiences. What is happening here, as Dante asks about these other famous Florentines, the "men of such worth"? He says, where are they? They achieved so much worth, so set on well-doing in the city--and the English cannot quite render the ambiguity of the Italian. The Italian is benfare, which is really very difficult to translate: you don't know if it means "doing well" or "doing good," and that impossibility of deciding what the sentence really means is exactly what Dante is dramatizing here.
What he's dramatizing is the distance between human perspective, the judgments that we make as human beings, and the divine judgment on the dealings and doings of these famous people; the discrepancy between them here on earth when they judge one way, then the real--the reality of the worth and value of these other people that can be different. We are talking about--he's talking about them, the black souls, that is to say they're further down in the fire and different faults weigh them down to the depth. What an extraordinary metaphor, the weight, the burden of sin, but it's really an image that goes back, the gravity, the question of gravity. This is--we speak of civic gravity, but here it's a different kind of gravity. It's an idea of--it's an old idea. When you want to talk about the weight that we carry within us, the gravity we have within us, that gravity is love. The way of deciding, the way of understanding this line: there's a passage in the Confessions of Augustine, where Augustine says, that he wants to exemplify why some people go up, other people go down, and he says it's like the gravity of objects around us. A stone, you drop a stone and the stone goes down out of its own gravity, its own specific weight. A fire, he says, goes up out of its own specific weight. We are carried wherever our love carries us. We are--our love is our own gravity, inner gravity, and whether we go up or down, it depends according to the direction of our desires. Let me just go back to--this is to give you a sense of all the resonances of this canto, but at the heart of it all, there is the question of civil war. Between Guelfs and the Ghibellines, between patricians and plebeians: Dante sees the whole of history, Roman history, whether he is going to read Virgil, or will read Lucan, or he will read Statius, which actually deals not with Roman history in this great epic the Thebaid. 
He reads--he's really reading Greek history, the story of Oedipus and Eteocles and Polyneices. They view history from the point of view of the civil war. Let me just formulate the question of the political understanding Dante has. For those of you who may have read a little bit of Monarchia, for instance, which is the treatise about the desirable form of a universal confederation of states, under the one emperor: that's the grand vision that Dante has in Monarchia. He thinks about the needed unity of all states, a kind of sort--we could call it today, a confederation of states, very much patterned on the Roman Empire. The idea of the--in fact the Roman Empire becomes the model for this kind of unification. That's really what most of us think that Dante's political vision is. In effect, Dante sees history especially as--and it's kind of inevitable--a satanic form of civil war. So harsh is he going to be about the realities of the cities, that you really wonder how can he go on elaborating a theory of, a constructive theory, of politics. You see what I'm saying? Once you are so harsh about the reality of politics, then you really wonder how can one go around really thinking that politics can be necessary. It's necessary that you can explain, that it's somehow useful, that it's feasible. Where does it say--where does this understanding of Rome come to him? Dante does not really agree with Virgil. And Dante does not agree with Virgil's greatest critic who is Augustine in The City of God. For Virgil, Rome is the providential empire, an empire that can really bring about, unify the whole world. Augustine writes against Virgil and says, no because even Rome, as I just indicated to you a little earlier, even Rome is part of the history of violence. Dante comes along and pulls together within the Divine Comedy the question of Rome and the needed empire and the question of the civil war. What do they have in common? What is it that connects them? 
Dante's argument is the following: you, Virgil, are right in believing in the unity of all mankind, a Stoic idea that we all live in a cosmopolis, in a city which is the city of the world where we all find a place. And you, Augustine, are right in claiming that the empire is all built and based on the libido and lust. You're right; you are both right, and yet you are both wrong, precisely because you contradict each other. What Dante says to Augustine, if there is no empire, then we are living in a world of disorder and lawlessness. The empire becomes the necessary remedy to the evils of the civil war. The civil war is the condition where my own brother, my own neighbor, can become my own enemy. Augustine does not acknowledge the reality of civil war. To him, it's just empire and the empire is evil. And we'll finish with the famous line: what do I care who governs me, provided that they don't make me sin? It's the famous Christian response to the idea of the evil, the historical evil of empires. Let me retreat into myself and find within myself some kind of comfort and some kind of shelter. And Dante will respond to him, says no that's not enough because once you think that you have retreated into yourself then there is the reality of the civil war that will reach into you. What I have been explaining to you and I will stop because I want to talk about something else before we go, Canto VII. What I've been trying to explain to you is that the movement from Canto V to Canto VI of Inferno, it's a movement from the internal world of desires that seem to be so private and so personal. Then, I said, Dante has to go outside of himself to test, to find out what the authenticity is of what he has found out in Canto V. In Canto VI, the political canto will tell them there is no such comfort zone of one soul in the world, that the inner world is necessarily part of the outside world and the outside world will encroach upon it and it will enter one's own inner world. 
The terms for this kind of movement between the inner and the outer are really Virgil and Augustine: Virgil with the idea of the defense of the empire, Augustine with his undermining of the notion of the necessity of the empire. Dante will go on harmonizing the two visions. He will endorse the idea of the empire, aware that that's the best possible response to the tragedy of civil wars. Let me say just a few things about Canto VII and then I'll give you a chance to ask some questions; there should be two or three minutes for questions. Canto VII also is a canto that can be read symmetrically with the other Canto VII of the Divine Comedy, Purgatorio VII, just as with Canto VI. I neglected to mention it, but Canto VI of Inferno is about the city and politics, Canto VI of Purgatorio about the nation, Canto VI of Paradiso about the empire, so they're really connected; the same thing with Canto VII. This is the only canto that does not individualize sinners. He meets the avaricious and the prodigals, and there is no individual figuration for them. It is as if this became an anonymous, and therefore a more collective, kind of problem--avarice and prodigality--which he represents in terms of the counter-movement of Scylla and Charybdis. And here we have the great figuration of Fortune. You remember, as I call her, the Vanna White of the time, the lady who is at the Wheel of Fortune, turning it blindfolded; and let me say something about this figuration. What is it about the avaricious and the prodigals, who turn around, one against the other--how can this be possible? Why are we so attached to the things of the world?
And then Dante goes on explaining, in Canto VII, lines 80 and following: "He ordained for them," He meaning God, "for worldly splendours, a general minister and guide who should in due time change vain wealth from race to race, and from one to another blood, beyond the prevention of human wits, so that one race rules and another languishes according to her sentence... She foresees, judges, and maintains her kingdom as the other heavenly powers do theirs. Her changes have no respite. Necessity makes her swift, so fast men come to take their turn. This is she who is so reviled"--meaning Fortune--"by the very men that should give her praise, laying on her wrongful blame and ill repute. But she is blest and does not hear it. Happy with the other primal creatures she turns her sphere and rejoices in her bliss." It's Fortune at the wheel, but it's a figuration that in many ways needs some explaining. How can Dante believe in God and in Fortuna? How can he go on talking about this pagan deity--she is a Roman deity, Lady Luck? How is he doing this? He lives in a world of providentiality, and yet he does say that Fortune is an intelligence of God. That is to say, though she's blindfolded, there are also some criteria: there is an intelligence, there is a will, and a meditation behind it. What it means is that what is up will inevitably turn down; it's an endless rotation of fortune. In a certain way, when you are down, it's the best place to be, because you can only go up. We are always, though, precariously poised on the curve. We are never quite stable in our own achievements. How can Dante relate this idea of fortune to the providential scheme that regulates and shapes his own vision? What I would have to tell you is two things.
The first thing is that, as you see, Canto VII begins with an allusion to the great war in Heaven, the angels, the primal struggle that disrupted the order of the cosmos. In other words, Fortune is for him the divinity that rules over the world, this sublunary world of generation and corruption. That is to say, she is a minister within the world of the fall--first thing. This is still a fallen world, and that's how we perceive all the changes that take place. And the other thing is that Dante is intimating that the only way to conquer Fortune is really to give up. It's a kind of mystical idea--mystical in the sense of a spiritual idea--that is: give up the attachment to the things of this world. Let's stop here with the briefest summary of Canto VII. Let me see if there are questions about some of the weighty issues that I raised in Cantos V and VI. There is much more that we can say, but if you want to ask questions, maybe I can qualify things that were left in the background. Please? Student: In Canto V, [inaudible]? Prof: The question is a very good question: what is the significance of Francesca doing all the talking and not Paolo, I guess. I take the significance to be that this is a canto where Dante understands some of the elements that he had put forth in the Vita nuova. You remember where we discussed the Vita nuova? There I indicated that in the great poem, "Women who have intellect of love," he discovers that women are the interlocutors about love. Not only are they the interlocutors about love, they are also the privileged interlocutors, because they know how to combine--because they understand the necessary interdependence of intellect and love. They are not two separate entities, they are not two separate aspects; and therefore now he has Francesca as a woman who can become indeed his own interlocutor.
That's one aspect; the other one is that medieval romances had made this extraordinary discovery, and I think that it's the most revolutionary change that has taken place in the consciousness, in the imagination, of the Western world in modern times. That is to say, before it became a sociological issue, before it became a philosophical problem, the dignity and worth of the woman was already retrieved and vindicated by romances. It's there that the woman becomes either the figure in charge or the partner, or friend, of the man. Does that answer your question? Student: [inaudible] Prof: By the way, the answer was yes. That could not be picked up by the video. Other questions? Well, okay, thank you, we'll see you next time with Canto IX, X, and XI, I guess. |
english_literature_lectures | Mark_Steel_Sylvia_Pankhurst_pt_4.txt | But relations between the Pankhursts reached a new low when, at the age of 45, Sylvia became an unmarried mother. She clearly took delight in the annoyance this caused the conservative wing of the family, especially as she sold the story to the News of the World: "From the obscurity in which she has lived since the memorable days of the militant suffragettes, Miss Sylvia Pankhurst springs a new sensation." Perhaps it was a coincidence, but a few weeks later Emmeline collapsed and died, and even that didn't stop the antagonism: at the memorial service one of the speakers was Stanley Baldwin, leader of the Conservatives, and Sylvia was excluded from the arrangements. And then came an extraordinary chapter in the history of the women's movement, which began when Mussolini invaded Ethiopia in 1936. Sylvia had been one of the few British socialists of her generation to oppose the philosophy of Empire--Harry Quelch, one of the leading early figures in the labour movement, had written that Zulus belonged to a different evolutionary epoch and it would be better if they all stayed in their own countries. Sylvia led the campaign in Britain to impose sanctions against the Italians, and travelled to Ethiopia to offer support, to the extent that officials at the British Embassy wrote: "This confounded Pankhurst woman is more fuzzy-wuzzy than the fuzzy-wuzzies." The Ethiopian Emperor, Haile Selassie, fled to Britain, living in Bath, where he became friends with Sylvia, and he would spend his holidays in Worthing, which I find extraordinary. I wonder if this footage shows him going up to a policeman and saying, "Excuse me..." [indistinct]. Once the Second World War started, Britain was at war with Italy, so they sent a force to Ethiopia to drive Mussolini out--but then they occupied the place themselves. So Sylvia continued to campaign for independence, to the extent that Churchill kept a file called "How to Answer Letters From Miss Sylvia Pankhurst." By the end of the war Haile Selassie had been reinstated as Emperor, and he invited Sylvia to visit as his advisor on policies for women. You can't help thinking that when the king of the Rastafarians invites one of the century's most famous feminists to advise him on policies for women, there must be some awkward moments--such as, um, this bit here about women being unclean during their periods: anything wrong with that one? One of Sylvia's campaigns was against the BBC. Each night during the war the Home Service would play the national anthems of the Allied countries, but not that of Ethiopia, so she led a campaign that forced the BBC to add it to the nightly anthems. Sylvia went to Addis Ababa and was appalled to find that a ban on blacks entering certain areas was still being operated by the British, so from her suburban home she kept on campaigning, and when a new constitution was agreed, ending the occupation and granting the vote to everyone over 21, she said: "The victory of Ethiopia is the most satisfactory achievement I have seen." At the age of 77 her beloved Silvio died, after which Sylvia wept for days. Sylvia and Silvio's thirty years here is marked only by this pacifist monument that they erected themselves in 1935, and as you can see, the good people of Woodford tend to it lovingly on a daily basis [indistinct]--nothing's too much trouble. When Sylvia received an invitation to work with the Emperor Haile Selassie, she went with her son to live in Ethiopia, and then in 1960, at the age of 78, she died. Haile Selassie flew to Addis Ababa to order a state funeral, at which he stood for the whole two hours, and across the East End of London loads of blokes stood by the side of the road going, "Whatever you say about her, she never threw bricks at her own..." Now, a large part of the women's movement claims that Jordan is a role model, or that Diana was a modern feminist--some even say she was a republican, though given that her main ambition was for her son to become king, I'd suggest that puts her on the moderate wing of the republican movement. And politicians blame the ever-decreasing number of people who bother to vote on apathy. But how can it be apathy, given that we've recently seen the largest demonstrations in British history? That's not apathy; it's because fewer and fewer people feel any connection between themselves and the politicians. It's like--if Cliff Richard was doing a concert three doors from my house, I wouldn't go. That wouldn't be apathy; I wouldn't be sat at home going, "Ooh, he'll be doing 'Bachelor Boy' in a minute, but I can't be bothered." It's willful [Applause] non-participation. Sylvia Pankhurst understood that the vote was worth campaigning for because it raised the status of women; it was a victory against the attitudes of those who would never allow women a say until they were bigger than men. However bizarre her final days, Sylvia Pankhurst lasted the course that so few managed, never for a minute embodying passionless dullness; throughout her entire life she was intrepid, spirited and interested. "Oh, what's the point? What's the point of coming here?" [indistinct] All right. The chairman of bad hair [indistinct]. On BBC Two, Donald Trump puts The Apprentice USA in the firing line next. Fight for your right to participate in the Mark Steel Lectures; the website is at open2.net [Music] |
english_literature_lectures | Frieze_Lecture_The_life_and_work_of_Charles_Dickens_Part_1.txt | Let me, um, tell you a few things about my interest in Dickens and how I got there. I'm a Shakespearean, not a nineteenth-century person, and so I'm here as the set-up man, I guess, to provide a host for other people to put their scholarship in. So one of the things I'm going to do is simply to give you an overview of Dickens, his work and his reputation. But when I was doing this I sort of got sidetracked, because I began to realize that Dickens is more than an author: Dickens is an institution, Dickens is a cottage industry. And that interested me, because so was Shakespeare. A lot of people criticize Stratford, England for having made such a big deal of William Shakespeare, and say that Shakespeare is also not just the writer but has become this great institutional name--and I think the same thing is true of Dickens. The only other writer that I think of immediately would be Mark Twain, who became a kind of an institution and, like Charles Dickens, had a major role in making himself an institution that people will still respond to, if you will. I'm not sure whether there is another room up above us, but if not, then this is the room that was the children's library. I'm a lifelong resident of Rock Island, and this was the children's library, and I spent a lot of time here. I spent enough time here that I can actually tell you: this is where the Babar the Elephant books were; over here was action biographies of Kit Carson and Davy Crockett; right here were the [Flon Bur] books, which I adored; and over there, when I got a little bit bigger, was Anne of Green Gables and Rebecca of Sunnybrook Farm. When I walk into this room and it's empty, it seems like a great empty mouth, because when I was here it was full of friends. I used to spend some Saturday mornings here, when my mother got my sister and me out of her hair and dropped us off at the library, and we'd spend our time absolutely having a wonderful time right here. I first met Dickens, however, not in this library but in a series of books that maybe some of you know of: Olive Beaupré Miller's My Book House. How many people have ever even heard of Olive Beaupré Miller and the Book House? Good. I'm surprised, because we're all about the right age to know the Book House--they were sold up in Chicago, and I don't want to say anything more about that. But one of the things that Olive Beaupré Miller did was to summarize an important section of the Copperfield books, and I read those for the first time in a volume which is part of the set--the volume was called "Up One Pair of Stairs" or something like that--and I was enjoying them all. When I came to David Copperfield, it was too boyish for me; I never quite liked it. And so Dickens was gone for me for a while, because he didn't have the kinds of things that, as a little girl, I was interested in at that point. Then--and forgive me for doing this sort of little bit of remembrance of how I got to Dickens, but it's actually going to connect with something I'm going to do at the end--after my first initial bad experience with Dickens, I spent the summers of my high school years and my early college years just reading all sorts of novels, everything canonical you could ever imagine, and that was where I began reading more things: Oliver Twist and the kinds of things, Great Expectations, that you would expect. When I went to college, Dickens was not mentioned; I do not believe that any Dickens work was taught at all when I was in college. And when I went to graduate school and took nineteenth-century novels, we read Dickens--but we read Bleak House. Now I don't know if you have ever heard of Bleak House, but if you're starting Dickens, don't start there. First of all, it's that big; secondly, it has to do--I think I'm supposed to be talking into this;
Can you hear me okay? It has to do with the Chancery courts in England. Dickens actually knew quite a lot about that, and he spends pages and pages explaining how things were done in the Chancery court; it's just much too much detail, I think, for most people. The problem with the Chancery court was that it took so long to get a suit through that people kept dying before their cases were actually resolved. So we read Bleak House, and it was a challenge to get through. I was rushing through graduate school, so I don't have much memory of it, and I've never been inclined to try it again, but maybe in retirement that's what I'll do.

Then one more step in this. When I came back to Augustana, as far as I remember, and I did check on this, there weren't many people who were interested in teaching Dickens in the years I've been there, and I've been there for forty-five years now. That's a lot of time, and I just do not remember many courses that had Dickens as a figure. So when I went to do this lecture, I took one of my classes that had a fair number of kids in it and asked them: do you know who Dickens is? And I got, "Of course we know who Dickens is." So then I said, have you ever heard of Oliver Twist? Have you ever heard of David Copperfield? What about A Christmas Carol? Yeah. And then I said, how many of you have actually read a book, and out of twenty-five people I got four. Now, these are English-major types, but I got exactly four, and when I checked, it turned out that all four of them had read Oliver Twist after they had seen the musical. So there's something interesting going on here, and I want to pursue it in a minute, but first I wanted to tell you some very basic things about Charles Dickens.

He was born in 1812, and that's one of those dates I know because of the War of 1812, although I can never remember what England and the United States were fighting about; I just know there was a war then. At that same time Napoleon was invading Russia, so that's the kind of thing that was going on when Dickens was born. His father was a Navy clerk. I don't think he was exactly a ne'er-do-well, because he had lots of different jobs, but he never seems to have been able to handle what came in in relation to what went out, and he spent a lot of time in debt. In nineteenth-century England, if you were in debt, you went to prison, and in fact his father was imprisoned in the Marshalsea prison. It was one of those things: you're imprisoned so you can pay your debt, but you can't pay your debt so you can get out. It's just a Catch-22 kind of situation. When somebody was in prison at that time, they sometimes brought the family with them; that was just part of the way the prison system worked, and so Dickens's father brought his wife and a couple of the kids. Dickens, however, was sent to a friend's house and was put to work. This is a boy who was twelve years old at the time. He was taken out of school and put into service at a blacking factory, where they made shoe polish. It sounds terribly unhealthy for a twelve-year-old child, but that's essentially what he did until somehow or other his father's debt was repaid and he was returned to school. He never went to the university, which is a fairly significant thing; a lot of his training as a writer was simply practicing and reading what other people did. As a young man he became a reporter, so he got a lot of practice writing, and in particular he covered the courts. When he did Bleak House, I think he just kept all that court material that he wanted to dump on people. At any rate, he wrote under the pseudonym Boz, and some of you may associate
Dickens with the name Boz. It was a name he gave to his brother when they were growing up.

Dickens is one of the earliest successful novelists that I know of: he was twenty-four years old when Pickwick Papers was published, and he was an instant success. People were crazy about that particular book, and it was followed within a year or two by Oliver Twist, which solidified the reputation Dickens had among the readers of London. As time went on his reputation went up and down a little, and I'll talk about that, but for the most part, at least among readers in England and in London, he continued to have a status something like what we would give a rock star. People went to his lectures; people attempted to shake his hand, to touch him. It was a very strange situation that I think he on the one hand enjoyed, while on the other hand his privacy was pretty well destroyed.

The first of his novels, Pickwick Papers, is a very loosey-goosey thing; its structure is very episodic. It's the travels of Mr. Pickwick and his servant, whose name was Weller. I'm not going to march through all fourteen or fifteen novels; I'm going to pick out two or three that I want you to keep in mind, again because of what I'm going to try to do a little later. One of the things Dickens developed in Pickwick Papers, besides his ability to create characters, which everybody in here I'm sure is aware of, was a way of identifying characters by verbal tags. A lot of people who look down on Dickens as a novelist would argue that his characters are not complex and that too often he relies upon a verbal tag. I don't know whether you know what I mean by this: a typical way somebody speaks, saying the same kind of thing every time, so that you can identify them by it. In the case of Weller, what Dickens did was to create bizarre similes, just bizarre similes, and you think to yourself, oh, that's Weller. It's used again and again and again, and whether you like that or not is really a matter of taste, it seems to me.

Oliver Twist was a book very interested in child labor. Dickens was a social critic, and you're going to hear a lot about this, I think, in everything that comes later, but he certainly was a social critic aware of the terrible things going on in England: child exploitation, child labor, problems with debt and the penal system, with the courts and so forth. He's very much aware of this, and most of his books, not all, but most of the books of this middle period, are concerned with it in one way or another, yet concerned with it in a comic context, so that you could care about these things but still have a laugh at the beginning, and things would turn out okay at the end. Nicholas Nickleby is another important book from this period, a little longer than some of the rest; it was a satire on boarding schools and children, and you'll notice that children and child characters become a major theme in a lot of what we're looking at here.

Dickens published serially, and that was not unusual in the nineteenth century. What it meant is that instead of publishing the book when the author finished it, you would get a little pamphlet, and I'm going to show you pictures of this in a minute, a little pamphlet that would contain apparently about sixty pages of the novel, and you could pay a shilling for that and take it home. Then when the next installment came out, you would pay another shilling and get the second part, and you would go on, perhaps through as many as twelve different numbers, until eventually you had the whole book. Now, there were a lot of useful aspects to this serial publication. One useful aspect is
from the point of view of the writer: you don't have to have the whole thing done before you can begin realizing money for your efforts. From the point of view of the reader it was also useful, because it might be expensive, as much as a pound, for example, to buy a fancy book of Dickens, but you could put a shilling down this month and a shilling down two months later, and over time people could begin to accumulate a book they might not have wanted to put money into all at once. So serial publication: Thackeray did it, and many, many other people of the time.

Then, after a book became famous, the common thing to do was to put the books together as a set, and again I encourage you to think about what your own experience was with this. From about 1850 on, in England and in America, it was very common for publishers to try to make money with sets of books intended for the bookcase. I can remember in my own home we had a set of Thackeray (I'm not sure many people ever read Thackeray, but we had a set), a set of Shakespeare, and a set of Dickens, and my dad liked Dickens, so those were actually read. It was one of the things you did when you reached a certain level of education and a certain level of income, that you could buy some of these beautiful books. Many of them had gold stamping, perhaps with a very dignified picture of the author and so forth. As book designs they weren't much, because they were designed by the printer rather than someone outside, but these books were expensive, and the person who wrote them could, as long as he lived, realize significant money from them.

In all of this, Dickens managed to marry; he married a woman named Catherine Hogarth, and he had ten children. I mention that only because, apart from being the appropriate number of children for a Victorian family, it was also true that in the 1860s, very close to the end of his life, which came in 1870, he had what we would call an affair, and I'm not sure that is quite the right way to talk about it. He discovered a soulmate. She was an actress, oh my gosh, an actress by the name of Ellen Ternan, and the intensity of their relationship, the intimacy, not on a physical level but in terms of that soulmate kind of thing, was really very important. Still to this day people don't know whether it was actually a physical affair or not, but the gossip very clearly said that it was, and the problem is that it sullied the reputation of Dickens. Just at the time he began this relationship with Ellen Ternan, he was beginning his tour of America. As some of you may know, he came to America twice, with very different kinds of responses on the two separate occasions, but he tried to arrange for Ellen Ternan to travel on another ship to America so that she would be there to meet him. His wife had had it, and so she moved out with the children, and from that point on, I think, they were in contact with each other maybe three times, by letter, having to do with business. So it was for all intents and purposes the end of his marriage, and also a kind of unfortunate twist in the direction of the lionization of Dickens.

I want to mention two other books: one is David Copperfield and the other is Great Expectations, and I have to think that Great Expectations is his finest work. I think it has the best balance of character and structure and plotting, and interest in issues without being buried in those issues. It is what's called a Bildungsroman, which is the German word for a coming-of-age novel. He wrote lots of coming-of-age stories, and that's one of the reasons that children, or young people, have historically been drawn
to it: he has all of these children, and in the case of Great Expectations and David Copperfield you have this coming of age, somebody acquiring a kind of maturity they did not have at the very beginning, and both of those, in that sense, are quite delightful books.

I think I have to kill him off now. In 1870 Dickens died, and I think his death is rather interesting, because one of the things he loathed about his contemporary society was the Victorian way of death. I don't know what you know about that, but I remember once seeing a Victorian picture of a little boy in a white casket, and maybe some of you have seen those little lockets that have hair in them; Mark Twain takes that whole thing apart as well. What I think he was interested in is that he was not going to be buried in the Victorian way, and so he actually had a midnight burial. He originally just wanted to be buried in a little churchyard, and people wouldn't allow that, so he was buried in Poets' Corner at Westminster Abbey, but he was buried at night, with a very simple ceremony and nobody around except a very few people who had been his friends. And then the next day they left his grave open with the casket in it, and people could come and drop flowers and do whatever they wanted, and then go. I kind of like that about him: for all that other people admired him and saw him as the greatest living writer, he was modest enough in his final days and in his final arrangements that he simply did not want the grand state funeral everybody would have expected.

So that's a little of what I want to say about Dickens himself, and what I want to do now is turn to his reputation. I'm interested in this in a couple of different ways. On the one hand I'm interested in his critical reputation, that is, how critics saw him then, how critics see him now, and in between. But I'm also interested in readerly criticism, and by that I mean the people who admire someone's work and buy the books and read them. There's a difference between professional critics' view of Dickens and the view that ordinary people who read Dickens have had all along, and those two things sometimes go in different directions.

In his own day, Dickens's reputation fluctuated, as I've already suggested. Most of the time the people who bought his books stayed with him and adored him, but from very early on the people who wrote reviews, the people whose ideas about writers were admired, had their doubts about Dickens. For example, George Meredith, the nineteenth-century writer, whom I think some of you may know, said about Dickens, just shortly after Dickens had died, that he was a lightweight: his plots were implausible, he was aesthetically crude, a caricaturist rather than a creator of character, with little intellectual challenge. Those are the things that keep coming up when people want to, as my students would say, trash Dickens. However, there is still this resiliency of a reading public that is very, very important. You pay your money and you take your choice, and it's the people who put money in the author's pocket who become important, no matter what anyone else says. F. R. Leavis, a big-gun twentieth-century critic of the novel, said about Dickens that "the adult mind doesn't as a rule find in Dickens a challenge to an unusual and sustained seriousness." Yet at the same time, there were people for whom the characters in Dickens were so alive that they moved outside
of the novel, and this is very rare. There are not many times I can think of among writers where the characters become freestanding figures that people know quite apart from the book. One example would be Shakespeare, where people who don't read Shakespeare can still recognize Hamlet, and the same with a lot of other characters. But Dickens is like this too, and I'm going to show you in a minute how that actually worked out. So you have this constant tension between the academic attitudes toward Dickens and the popular attitude.

It is not fair to say, however, that all academics have this attitude toward Dickens. There is right now a thing called the Dickens Project (it's a little like the Mark Twain project), a consortium of major universities, Yale and places like that, that are active in this kind of thing, that have done all sorts of work publishing his letters and encouraging scholarship on Dickens. So it certainly is not true that he has been deserted by academia, even though my own experience, as I told you, doesn't suggest that people are learning about Dickens from the schools. It's not coming through the schools. So where is it? And that's where I'm going to start showing my little pictures.

I call this the selling of Charles Dickens, and I don't mean that in a bad way; I really want to emphasize that. Dickens was a businessman. It's rare that you get artists who also have business talent, and he was one of those people who could both create and write but also knew how to turn that writing into something people would buy, something that would make him a reasonably rich man. From the very earliest time, Dickens was adapted for the stage, and I mean that while he was still living and writing these books, his work was being adapted for the stage. So if we think of things like the Nicholas Nickleby musical, or Oliver!, or whatever, as a latter-day adaptation, it was in fact something going on from the very beginning, and it continued through the late nineteenth century into the twentieth. So my first point is that Dickens has been sold to the American public and to the English public by the stageworthiness of the books, and what I mean by stageworthiness is that in order for something to work on the stage you need vivid characters, and that's one of the things he could do. And if those characters were a little thin according to the critics, when you put a live, talented actor into the role, a lot of that thinness melts away. So it's important to see that there has been, and is, a continuing tradition of awareness of Dickens through theatrical performances, and I'll show you some pictures from those in just a minute. Sometimes they've been serious plays, sometimes musicals, but there they are, in our own day.

The next thing I want to say is that it really helped Dickens that he was a writer for children, or that he was understood to be a writer for children. All those young heroes would encourage children to read the books, and so, even though he's perhaps not in the curriculum of many schools, there are plenty of people who have still found their way to Dickens and been delighted by the characters and the fairly rapid-moving plots.

Here's the point I ultimately wanted to get at: how did Dickens come to be Dickens? I could ask the same thing about how Shakespeare became Shakespeare. What I mean is, how did a living writer, with strengths and with foibles, become a cultural institution that moved over the centuries and still exists? My argument would be that although part of it is the quality of his work, and I tried to address that, there is
also an element of selling, of business, of making a business of writing, that you can see in Dickens from his earliest time. He regularly gave lectures and readings, and his readings were extremely popular. What he would do was take a scene, like the "may I have some more, please" from Oliver Twist (if you happen to know it, you'll realize that's a very moving moment), and he would read these scenes dramatically. He would give dramatic readings, and he was pretty good at this; people flocked not only to buy his books but then to hear his readings. He did a version of A Christmas Carol that was shortened enough for him to do in one reading. So there is a whole Dickens out there, do you see, that is part of the culture in a way that does not simply involve somebody buying a book and reading it.

The next thing I want to say is that there is an odd way in which Dickens came to represent Victorian England. If you can say "the age of Shakespeare" and everybody knows we're talking about the late sixteenth century, you can do exactly the same thing with Dickens: the age of Dickens runs from about the 1830s right until the time he died, writing Edwin Drood, which was never completed. He was a part of London. There are plenty of books out there (I tried looking through Amazon) about Dickens's London. It's a fascinating idea if you think about it, do you see: it's not just Dickens, but Dickens represents a particular way that people lived, and so the idea of him becoming the representative of an age is a rather important part of the selling of Dickens. You can still read a book about the age of Dickens, never read one of his books, and it would still be an interesting thing.

The next point I want to make about this business of selling Dickens is that he became associated with Christmas. That sounds like a trivial thing; it is not a trivial thing. I read one person who said that Christmas as we know it (this was an Englishman, so I think he meant Christmas as celebrated in England) would have been different had it not been for Dickens: the popularity of A Christmas Carol, and with it Christmas customs, the gathering of people under certain circumstances, even the kind of food. There are a couple of different Dickens Christmas cookbooks, that kind of thing. Do you see the point I'm trying to make? There is a Dickens out there that is different from the kind of thing we would say if we were talking about Thackeray or somebody else, and that world Dickens began to cultivate even while he was still alive, and clever businessmen have continued cultivating it right into the twentieth century. I guess I would say the other person who did this was Mark Twain. Mark Twain was really able to sell himself as a lecturer, with that white suit he wore and his white hair, and he became very much a celebrity in America, with an important role in the publishing and distribution of his own work. So that's what I wanted to get at here.

Right now I want to have you see some pictures, and some of them will pick up things I've talked about. Whoa, let me go back. See that handsome man? I put this picture here because it is so unlike the usual Dickens; I mean, this is a very handsome fellow. I always think of Dickens as the author's-cards Dickens, which is the older Dickens. It's not really showing through very well, is it? Is it the lights? We'll get it, sorry. This is a completely lovely painting in full color, and I have not the faintest notion why it's not coming through, but nevertheless, here we are. This is the Dickens we're more
concerned with, I think. The one in the corner is the author's-card Dickens. By the way, that notion of the author's card is part of the merchandising, isn't it? How many of us learned about nineteenth-century novels because we played author's cards when we were kids? And you could name the most famous of his books: Pickwick Papers, Oliver Twist, David Copperfield, and I think the fourth one is Great Expectations. Those really are the most important of the books, and they were the ones on the author's cards.

This next one is also all washed out; it's actually a really nice green, and this was the way his books were serially published. In fact, one man made the comment that on the day Mr. Dickens's serial number came out, the street was papered green, suggesting that everybody was in line ready to pick up their copy.

Now I'm going to say something that may offend you, and that is that one of the elements that has been most important in preserving the reputation of Dickens and others is Classic Comics. I sometimes ask my students whether they have a certain kind of information and whether they got it from Classic Comics, more so earlier on than right now, but the Classic Comic had a major role in establishing Dickens's reputation. These are some very early ones; I tried to read the number and couldn't. The one on the side is uncomfortably close to my own childhood, maybe a little earlier, and this one is a much more recent Classic Comic; you see it's Oliver Twist. There are other Classic Comics I found, Great Expectations and A Tale of Two Cities, so the comics have been a force in establishing his reputation. I do believe that some students will read the Classic Comic and later in life go on and read the book itself; many will not, but there you are.

Now, the new thing is the notion of the graphic novel. I don't know how much you know about that, but graphic novels are the new thing, and some of them are absolutely stunning. They are adaptations of classics, but instead of trying to be very, very true to the originals, there is more of what we might call editorial space, and the way the book is designed is part of the appeal. They're intended for students, but I find all the time that my college students have read them. People will bring me the graphic novel of The Tempest: "Oh, you've got to see this." For a long while I carried them in the back of my car and never looked at them, and when I finally got around to looking at them, I was stunned; I thought they were absolutely marvelous. The point I'm making, again, is that things besides just the reading of the novels maintain the reputation and the currency of the author, and I've got some more bizarre things than that for you.

Here are some Big Little Books and some advertisements for films. Dickens is one of the most frequently filmed authors; I read there were something like six hundred different films of Dickens works, from the beginning of the twentieth century to now. That's an enormous number. That's W. C. Fields playing Mr. Micawber, you see him there, and there's Oliver Twist looking pathetic. I put this one in because I like theater and thought it looks like a play I'm dying to see: it's A Tale of Two Cities, and you can see what a really clever director and producer can do with a story when they put it on the stage; the visual of this is just absolutely magnificent. If you know A Tale of Two Cities, it's about the French Revolution, and you can see I was caught by the stunning color. There's Oliver!, and many people know Great Expectations; the Utah Shakespeare Festival
did a musical version that was fairly successful.

All right, now, this is my final point. I want to say that Dickens exists as objects as well as books, and I want you to see a little of the variety of objects. The first set of objects are very much what you would expect: the bust of Dickens (there are busts of a lot of the famous nineteenth-century figures), and then the ubiquitous bookends of the author. These are all different bookends that you can, even as we speak, buy on eBay. Then there is something called, I think, Department 56, and it is the beginning of collectibles: a Victorian village with all of these marvelous little images of houses. They started as houses associated with Dickens, and then they got bigger and bigger and bigger, and there are literally hundreds now. You see that lovely little piece there, and on the other side is a series of characters that you could use to play out A Christmas Carol; those are all Christmas Carol figures. So if you want to collect things, you can collect Dickensiana, and a lot of it is going to include villages and imaginary London sets. There's tons of this stuff available.

Then Royal Doulton got into the act, and not just Doulton but Wedgwood and a lot of the other great potters of England, and they began creating images of Dickensian characters, and that really solidifies the point I made, that the characters jump out of the book and become characters in their own right. Dickens is on the side there with his ink and his quill, but everybody who knows anything about Dickens will see that that is Oliver Twist with his bowl, asking for more, please, and it's really a very handsome piece of porcelain, as a matter of fact. Not all of them are handsome; a lot of them are really chintzy. Two here: the Artful Dodger, a very nice Royal Doulton, and on the other side, and I bet you've seen these in catalogs now, where they show all the different writers as these little potbellied images where you can take the head off and so forth. You can get one of Shakespeare, and you can definitely get one of Dickens. Here's just more beautiful porcelain on the subject of Dickens: there's a water jug and a fruit bowl, and this is another water jug, and it would be fun to play the game of "can you tell who that is." Some of these are really quite delightful. These are all Dickensian things. Now, those are very beautiful, and then you can go to tacky: everybody's favorite, the platter on the wall. The ones down in the corner are flow blue, made by a major English porcelain maker; the one up above looks to me very much like something I remember getting from the Eagle supermarket. Anyway, there's a whole set of those Dickens plates, everything from wood down below to some kind of resin at the top. And this is my example of the worst: a Dickens snow globe, a set of Dickens demitasse spoons, and Dickens ornaments for the tree.

So I guess what I want to say is that I'd like you to give some thought to how you know about Dickens and how you got to be here for this occasion. It would be interesting to know, and I won't embarrass anybody: how many people in here have actually read Dickens? That's pretty good, but you aren't exactly your average bears. We all came to Dickens by strangely circuitous routes. I said that my first reading of Dickens was in the Olive Beaupré Miller book, but that isn't quite true. My mother had on a table a series of Royal Doulton figurines. I can't remember them all; I know Weller was there, and a fat lady, though I'm not quite sure exactly who she was. But I remember asking my mother, what are those? And she said, oh, those are Dickens. So really my first introduction was an object that was Dickens,
and now I've got to find out later who this man is and how it goes so I guess uh a is alive and well um the guardian in England is has taken charge of the 200th anniversary celebration and they have an incredible array of things everything from a wreath to uh scholarly conferences to uh uh dinners that uh you know to celebrate Dickens and so forth and so Dickens is alive and well both in the objects of our po popular culture and also I think uh in the minds and hearts of people who haven't read any of his books thank you Dr Young would you mind taking the question I yes sir when we were when I was growing up this is my boss when I was growing up everybody I knew because we had the Scot for with reader had to read both and I think in this order Taylor two cities and then David C yes is that not done anymore no appar not no and you know part of it is that there's so many so much new youth fiction that has just got nudged right out and uh and I mean what that's going to mean in the future realistically I don't know because kids are not coming to college having read uh very many of those but those they have they have they don't come anymore reading any of the no asked what they had English class never they do a little bit better with Mark Wayne and and with American Writers because I think there's more emphasis on American Writers now in the schools than there was uh back when I was in school so yeah um where does in terms of contemporary feminist what oh feminist feminist criticism yeah um one I think one of the disadvantages that that Dickens has had in the university is that his women characters female characters are not strong characters on the whole and so uh there's been comparatively little real interest wilky Collins who was one of his best friends the feminists are fascinated by wiy Collins for a number of I think the complicated reasons but uh feminists are not terribly interested in and that may also account for you know uh the lack of interest of at the 
university. A colleague back there is going to tell us all about postcolonial perspectives on Dickens, and that seems to me a very valid and interesting approach. The people who are interested in social criticism, Marxists and others, do find Dickens really interesting, because there is a kind of socialist dimension to a lot of what he does; but not so much the women. Those of us who are female and have read them find there are very few female characters; that's my problem, from when I first read Copperfield. Yes, 54 years ago, as a freshman in high school. Until this moment I never realized why they were assigning that, because I was bored to death; but now I realize: the intellectual maturity of a small-town farm boy reading stories of a big city, urban problems, very complex, a whole other way of living. Like you said, a maturing, an intellectual maturity. That's right. And I think it is also possible to say that the English are more in love with Dickens than the Americans are. He has become so enshrined in England that if I were teaching a class in England and asked all the same questions I asked in my class here, the results would have been different, I'm absolutely certain. I enjoyed teaching Great Expectations many years ago. My question is, how did he die? A natural death, as far as I know. He had been working until very close to the end of his life; that's the novel The Mystery of Edwin Drood. He did have gout, and I don't know the details, but his public readings really wore him out. As I say, he was writing up to very close to the end. That is also part of the cottage industry that is Dickens: how does Dickens's novel Edwin Drood end? Is he alive or is he not? It's really kind of a fun thing, and any
number of people have suggested endings or actually written alternative endings, a little bit like they do with Jane Austen, where you get to guess how you think it really ends; and that also continues the reputation. What does the expression "what the Dickens" mean? I have no idea where it began; it was a bit of a corruption of "what the devil," and if that was too harsh for your pietistic family life, you could say "what the Dickens." Well, you really have a different view on this than we have, because in England he belongs to you more than he belongs to us. From an artistic sense, how interesting: he was the first major celebrity of his time. When I say rock star, I really mean it. Dickens and Oscar Wilde; Wilde is the other one, not because of the grandeur of what he was doing, but because of this ability to dramatize himself and to sell himself. He seemed to have that ability just as Dickens did. So Dickens was aware of his celebrity status and he cultivated it, but he also hated it. That is the same thing everybody says now: they really like being admired, but they want people to leave them alone, and that doesn't go with the public's curiosity. He has lived on 200 years; people were making pottery and porcelain. How did they protect it? They didn't have copyright, so what happens with that? I don't know at what point, or even to what extent, any copyright would have been functioning at that time, because I don't know what the copyright laws were for England; it would be different in America. Are people still buying the things that are being made? Absolutely, absolutely, but they are out of copyright by almost all standards, I would think. It's an interesting question: what is the limit of what you can do to these
characters? There is a Dickens thing called "Dickens Unzipped." I don't know what that is; I know there is a Shakespeare version, and I've avoided that too, but it does suggest some kind of romp on Dickens. Just as with the Saturnalia, you have a kind of poetic license to do things like that. That may be the subject for a future series: authors unzipped. The good news, friends, is two things. Thanks to our friends at Theo's here in downtown Rock Island, we've got some wonderful treats for you, and you can carry on the conversation in the gallery right next door. But this is not the end of our series. Have you ever wondered, when a modern-day journalist drops in the word "Dickensian," what is meant? They're not just talking about the works of Dickens; they're talking about a world. Next Tuesday, if you'll join us at 2 p.m., Dr. David Ellis from Augustana's history department will tell us all about the world of Dickens. But for now, would you please join me in thanking our inaugural speaker, Dr.
Introduction and Preface of A Brief History of English and American Literature. This is a LibriVox recording. All LibriVox recordings are in the public domain. For more information, or to volunteer, please visit librivox.org. Recording by Kalinda. A Brief History of English and American Literature, by Henry A. Beers, with introduction and supplementary chapters on the religious and theological literature of Great Britain and the United States by John Fletcher Hurst.

Introduction. At the request of the publishers the undersigned has prepared this introduction and two supplementary chapters on the religious and theological literature of Great Britain and the United States. To the preacher in his preparation for the pulpit, and also to the general reader and student of religious history, the pursuit of the study of literature is a necessity. The sermon itself is a part of literature, must have its literary finish and proportions, and should give ample proof of a familiarity with the masterpieces of the English tongue. The world of letters presents to even the casual reader a rich and varied profusion of fascinating and luscious fruit; but to the earnest student who explores with thorough research and sympathetic mind the intellectual products of countries and times other than his own, the infinite variety so strikingly apparent to the superficial observer resolves itself into a beautiful and harmonious unity. Literature is the record of the struggles and aspirations of man in the boundless universe of thought. As in physics the correlation and conservation of force bind all the material sciences together into one, so in the world of the intellect all the diverse departments of mental life and action find their common bond in literature. Even the signs and formulas of the mathematician and the chemist are but abbreviated forms of writing, stenography of those exact sciences. The simple chronicles of the
annalist, the flowing verses of the poet clothing his thought with wingèd words, the abstruse propositions of the philosopher, the smiting protests of the bold reformer, whether in church or state, the impassioned appeal of the advocate at the bar of justice, the argument of the legislator on behalf of his measures, the very cry of inarticulate pain of those who suffer under the oppression of cruelty: all have their literature. The minister of the gospel, whose mission is to man in his highest and holiest relations, must know the best that human thought has produced, if he would successfully reach and influence the thoughtful and inquiring. Perhaps our best service here will be to suggest a method of pursuing a course of study in literature, both English and American. The following work of Professor Beers touches but lightly, and scarcely more than opens, these broad and inviting fields, which are ever growing richer and more fascinating. While man continues to think he will weave the fabric of the mental loom into infinitely varied and beautiful designs. In the general outlines of a plan of literary study which is to cover the entire history of English and American literature, the following directions, it is hoped, will be of value. 1. Fix the great landmarks, the general periods, each marked by some towering leader around whom other contemporary writers may be grouped. In Great Britain the several and successive periods might thus be well designated by such authors as Geoffrey Chaucer or John Wycliffe; Thomas More or Henry Howard; Edmund Spenser or Sir Walter Raleigh; William Shakespeare or Francis Bacon; John Milton or Jeremy Taylor; John Dryden or John Locke; Joseph Addison or Joseph Butler; Samuel Johnson or Oliver Goldsmith; William Cowper or John Wesley; Walter Scott or Samuel Taylor Coleridge; William Wordsworth or Thomas Chalmers; Alfred Tennyson, Thomas Carlyle, or William Makepeace Thackeray. A similar list for American literature would place as leaders in letters Thomas Hooker or Thomas
Shepard; Cotton Mather; Jonathan Edwards; Benjamin Franklin; Philip Freneau; Noah Webster or James Kent; James Fenimore Cooper or Washington Irving; Ralph Waldo Emerson or Edward Everett; Joseph Addison Alexander or William Ellery Channing; Henry Wadsworth Longfellow, James Russell Lowell, or Nathaniel Hawthorne. The prosecution of the study might be carried on in one or more of several ways, according either to the purpose in view or the tastes of the student. Attention might profitably be concentrated on the literature of a given period and worked out in detail by taking up individual authors, or by classifying all the writers of the period on the basis of the character of their writings, such as poetry, history, belles-lettres, theology, essays, and the like. Again, the literature of a period might be studied with reference to its influence on the religious, commercial, political, or social life of the people among whom it is circulated, or as the result of certain forces which have preceded its production. It is well worth the time and effort to trace the influence of one author upon another, or many others, who, while maintaining their individuality, have been, either in style or method of production, unconsciously molded by their confreres of the pen. The divisions of writers may again be made with reference to their opinions and associations in the different departments of life where they have brought their active labors, such as in politics, religion, moral reform, or educational questions. The influence of the great writers in the languages of the Continent upon the literature of England and America affords another theme of absorbing interest, and has its peculiarly good results in bringing the student into close brotherhood with the fruitful and cultured minds of every land. In fact, the possible applications of the study of literature are so many and varied that the ingenuity of any earnest student may devise such as the exigencies of his own work may require. John F. Hurst. Washington. Preface. In so brief a
history of so rich a literature, the problem is how to get room enough to give, not an adequate impression (that is impossible), but any impression at all, of the subject. To do this I have crowded out everything but belles-lettres. Books in philosophy, history, science, etc., however important in the history of English thought, receive the merest incidental mention, or even no mention at all. Again, I have omitted the literature of the Anglo-Saxon period, which is written in a language nearly as hard for a modern Englishman to read as German is, or Dutch. Caedmon and Cynewulf are no more a part of English literature than Virgil and Horace are of Italian. I have also left out the vernacular literature of the Scotch before the time of Burns: up to the date of the union, Scotland was a separate kingdom, and its literature had a development independent of the English, though parallel with it. In dividing the history into periods I have followed, with some modifications, the divisions made by Mr. Stopford Brooke in his excellent little Primer of English Literature. A short reading course is appended to each chapter. Henry A. Beers. End of Introduction and Preface.

Chapter 1. A Brief History of English and American Literature, by Henry A. Beers. Part 1, Chapter 1: From the Conquest to Chaucer, 1066 to 1400. The Norman conquest of England in the 11th century made a break in the natural growth of the English language and literature. The old English or Anglo-Saxon had been a purely Germanic speech, with a complicated grammar and a full set of inflections. For 300 years following the Battle of Hastings this native tongue was driven from the king's court and the courts of law, from parliament, school, and university. During all this time there were two languages spoken in England: Norman French was the birth tongue of the upper classes, and English of the lower. When the latter finally got the better
in the struggle, and became, about the middle of the 14th century, the national speech of all England, it was no longer the English of King Alfred. It was a new language, a grammarless tongue, almost wholly stripped of its inflections. It had lost a half of its old words, and had filled their places with French equivalents. The Norman lawyers had introduced legal terms; the ladies and courtiers, words of dress and courtesy. The knight had imported the vocabulary of war and of the chase. The master-builders of the Norman castles and cathedrals contributed technical expressions proper to the architect and the mason. The art of cooking was French. The naming of the living animals, ox, swine, sheep, deer, was left to the Saxon churl who had the herding of them, while the dressed meats, beef, pork, mutton, venison, received their baptism from the table-talk of his Norman master. The four orders of begging friars, and especially the Franciscans or Gray Friars, introduced into England in 1224, became intermediaries between the high and the low. They went about preaching to the poor, and in their sermons they intermingled French with English. In their hands, too, was almost all the science of the day: their medicine, botany, and astronomy displaced the old nomenclature of leechdom, wort-cunning, and starcraft. And finally, the translators of French poems often found it easier to transfer a foreign word bodily than to seek out a native synonym, particularly when the former supplied them with a rhyme. But the innovation reached even to the commonest words in everyday use, so that voice drove out steven, poor drove out earm, and color, use, and place made good their footing beside hue, wont, and stead. A great part of the English words that were left were so changed in spelling and pronunciation as to be practically new. Chaucer stands, in date, midway between King Alfred and Alfred Tennyson, but his English differs vastly more from the former's than from the latter's. To Chaucer, Anglo-Saxon was as much a dead language as it is to
us. The classical Anglo-Saxon, moreover, had been the Wessex dialect, spoken and written at Alfred's capital, Winchester. When the French had displaced this as the language of culture, there was no longer a king's English or any literary standard. The sources of modern standard English are to be found in the East Midland, spoken in Lincoln, Norfolk, Suffolk, Cambridge, and the neighboring shires. Here the old Anglian had been corrupted by the Danish settlers, and rapidly threw off its inflections when it became a spoken and no longer a written language, after the Conquest. The West Saxon, clinging more tenaciously to ancient forms, sank into the position of a local dialect; while the East Midland, spreading to London, Oxford, and Cambridge, became the literary English in which Chaucer wrote. The Normans brought in also new intellectual influences and new forms of literature. They were a cosmopolitan people, and they connected England with the Continent. Lanfranc and Anselm, the first two Norman archbishops of Canterbury, were learned and splendid prelates of a type quite unknown to the Anglo-Saxons. They introduced the scholastic philosophy taught at the University of Paris, and the reformed discipline of the Norman abbeys. They bound the English Church more closely to Rome, and officered it with Normans. English bishops were deprived of their sees for illiteracy, and French abbots were set over monasteries of Saxon monks. Down to the middle of the 14th century the learned literature of England was mostly in Latin, and the polite literature in French. English did not at any time altogether cease to be a written language, but the extant remains of the period from 1066 to 1200 are few and, with one exception, unimportant. After 1200 English came more and more into written use, but mainly in translations, paraphrases, and imitations of French works. The native genius was at school, and followed awkwardly the copy set by its master. The Anglo-Saxon poetry, for example, had been rhythmical and alliterative. It was
commonly written in lines containing four rhythmical accents, and with three of the accented syllables alliterated: "Reste hine tha rum-heort; reced hliuade geap ond gold-fah; gæst inne swæf." "Rested him then the great-hearted; the hall towered, roomy and gold-bright; the guest slept within." To this rude, energetic verse the Saxon scop had sung to his harp or glee-beam, dwelling on the emphatic syllables, passing swiftly over the others, which were of undetermined number and position in the line. It was now displaced by the smooth metrical verse with rhymed endings which the French introduced, and which our modern poets use, a verse fitted to be recited rather than sung. The old English alliterative verse continued, indeed, in occasional use to the sixteenth century. But it was linked to a forgotten literature and an obsolete dialect, and was doomed to give way. Chaucer lent his authority to the more modern verse system, and his own literary models and inspirers were all foreign, French or Italian. Literature in England began to be once more English and truly national in the hands of Chaucer and his contemporaries, but it was the literature of a nation cut off from its own past by three centuries of foreign rule. The most noteworthy English document of the eleventh and twelfth centuries was the continuation of the Anglo-Saxon Chronicle. Copies of these annals, differing somewhat among themselves, had been kept at the monasteries in Winchester, Abingdon, Worcester, and elsewhere. The yearly entries were mostly brief, dry records of passing events, though occasionally they become full and animated. The fen country of Cambridge and Lincolnshire was a region of monasteries. Here were the great abbeys of Peterborough and Croyland and Ely minster. One of the earliest English songs tells how the savage heart of the Danish king Canute was softened by the singing of the monks in Ely: "Merie sungen muneches binnen Ely, tha Cnut chyning reu ther by; roweth, cnihtes, noer the land, and here we thes muneches sang." It was among the dikes and marshes of this fen country that the bold outlaw Hereward, "the last of the English," held out for some years against the Conqueror. And it was here, in the rich abbey of Burch or Peterborough (the ancient Medeshamstede, meadow-homestead), that the Chronicle was continued for nearly a century after the Conquest, breaking off abruptly in 1154, the date of King Stephen's death. Peterborough had received a new Norman abbot, Turold, "a very stern man," and the entry in the Chronicle for 1070 tells how Hereward and his gang, with his Danish backers, thereupon plundered the abbey of its treasures, which were first removed to Ely, then carried off by the Danish fleet and sunk, lost, or squandered. The English of the later portions of this Peterborough chronicle becomes gradually more modern, and falls away more and more from the strict grammatical standards of the classical Anglo-Saxon. It is a most valuable historical document, and some passages of it are written with great vividness, notably the sketch of William the Conqueror, put down in the year of his death (1086) by one who had looked upon him, and at another time dwelt in his court. "He who was before a rich king, and lord of many a land, he had not then of all his land but a piece of seven feet. Likewise he was a very stark man and a terrible, so that one durst do nothing against his will. Among other things is not to be forgotten the good peace that he made in his land, so that a man might fare over his kingdom with his bosom full of gold unhurt. He set up a great deer preserve, and he laid laws therewith, that whoso should slay hart or hind, he should be blinded. As greatly did he love the tall deer as if he were their father." With the discontinuance of the Peterborough annals, English history written in English prose ceased for 300 years. The thread of the nation's story was kept up in Latin chronicles, compiled by writers partly of English and partly of Norman descent. The earliest of these, such as Ordericus Vitalis,
Simeon of Durham, Henry of Huntingdon, and William of Malmesbury, were contemporary with the later entries of the Saxon chronicle. The last of them, Matthew of Westminster, finished his work in 1273. About 1300 a monk of Gloucester composed a chronicle in English verse, following in the main the authority of the Latin chronicles, and he was succeeded by other rhyming chroniclers in the 14th century. In the hands of these the true history of the Saxon times was overlaid with an ever-increasing mass of fable and legend. All real knowledge of the period dwindled away, until in Capgrave's Chronicle of England, written in prose in 1463-64, hardly anything of it is left. In history, as in literature, the English had forgotten their past, and had turned to foreign sources. It is noteworthy that Shakespeare, who borrowed his subjects and his heroes sometimes from authentic English history, sometimes from the legendary history of ancient Britain, Denmark, and Scotland, as in Lear, Hamlet, and Macbeth, ignores the Saxon period altogether. And Spenser, who gives in his second book of the Faerie Queene a resume of the reigns of fabulous British kings, the supposed ancestors of Queen Elizabeth, his royal patron, has nothing to say of the real kings of early England. So completely had the true record faded away that it made no appeal to the imaginations of our most patriotic poets. The Saxon Alfred had been dethroned by the British Arthur, and the conquered Welsh had imposed their fictitious genealogies upon the dynasty of the conquerors. In the Roman de Rou, a verse chronicle of the dukes of Normandy, written by the Norman Wace, it is related that at the battle of Hastings the French jongleur, Taillefer, spurred out before the van of William's army, tossing his lance in the air and chanting of Charlemagne, and of Roland and Oliver, and the peers who died at Roncesvalles. This incident is prophetic of the victory which Norman song, no less than Norman arms, was to win over England. The lines which Taillefer sang
were from the Chanson de Roland, the oldest and best of the French hero sagas. The heathen Northmen, who had ravaged the coasts of France in the 10th century, had become, in the course of 150 years, completely identified with the French. They had accepted Christianity, intermarried with the native women, and forgotten their own Norse tongue. The race thus formed was the most brilliant in Europe. The warlike, adventurous spirit of the vikings mingled in its blood with the French nimbleness of wit and fondness for display. The Normans were a nation of knights-errant, with a passion for prowess and for courtesy. Their architecture was at once strong and graceful. Their women were skilled in embroidery, a splendid sample of which is preserved in the famous Bayeux tapestry, in which the conqueror's wife, Matilda, and the ladies of her court wrought the history of the Conquest. This national taste for decoration expressed itself not only in the ceremonious pomp of feast and chase and tourney, but likewise in literature. The most characteristic contributions of the Normans to English poetry were the metrical romances or chivalry tales. These were sung or recited by the minstrels, who were among the retainers of every great feudal baron, or by the jongleurs, who wandered from court to castle. There is a whole literature of these romans d'aventure in the Anglo-Norman dialect of French. Many of them are very long, often thirty, forty, or fifty thousand lines, written sometimes in a strophic form, sometimes in long alexandrines, but commonly in the short, eight-syllabled rhyming couplet. Numbers of them were turned into English verse in the thirteenth, fourteenth, and fifteenth centuries. The translations were usually inferior to the originals. The French trouvère, or finder, told his story in a straightforward, prosaic fashion, omitting no details in the action, and unrolling endless descriptions of dresses, trappings, gardens, etc. He invented plots and situations full of fine possibilities, by which later poets have profited,
but his own handling of them was feeble and prolix. Yet there was a simplicity about the old French language, and a certain elegance and delicacy in the diction of the trouvères, which the rude, unformed English failed to catch. The heroes of these romances were of various climes: Guy of Warwick and Richard of the Lion Heart of England, Havelok the Dane, Sir Troilus of Troy, Charlemagne, and Alexander. But, strangely enough, the favorite hero of English romance was that mythical Arthur of Britain, whom Welsh legend had celebrated as the most formidable enemy of the Saxon invaders and their victor in twelve great battles. The language and literature of the ancient Cymry or Welsh had made no impression on their Anglo-Saxon conquerors. There were a few Welsh borrowings in the English speech, such as bard and druid, but in the old Anglo-Saxon literature there are no more traces of British song and story than if the two races had been sundered by the ocean instead of being borderers for over 600 years. But the Welsh had their own national traditions, and after the Norman Conquest these were set free from the isolation of their Celtic tongue and, in an indirect form, entered into the general literature of Europe. The French came into contact with the old British literature in two places: in the Welsh marches in England, and in the province of Brittany in France, where the population is of Cymric race, and spoke, and still to some extent speaks, a Cymric dialect akin to the Welsh. About 1140 Geoffrey of Monmouth, a Benedictine monk, seemingly of Welsh descent, who lived at the court of Henry the First and became afterward bishop of St.
Asaph, produced in Latin a so-called Historia Britonum, in which it was told how Brutus, the great-grandson of Aeneas, came to Britain and founded there his kingdom called after him, and his city of New Troy (Troynovant) on the site of the later London. An air of historic gravity was given to this tissue of Welsh legends by an exact chronology and the genealogy of the British kings, and the author referred, as his authority, to an imaginary Welsh book given him, as he said, by a certain Walter, archdeacon of Oxford. Here appeared that line of fabulous British princes which has become so familiar to modern readers in the plays of Shakespeare and the poems of Tennyson: Lear and his three daughters; Cymbeline; Gorboduc, the subject of the earliest regular English tragedy, composed by Sackville and acted in 1562; Locrine and his queen Gwendolen and his daughter Sabrina, who gave her name to the river Severn, was made immortal by an exquisite song in Milton's Comus, and became the heroine of the tragedy of Locrine, once attributed to Shakespeare; and, above all, Arthur, the son of Uther Pendragon, and the founder of the Table Round. In 1155 Wace, the author of the Roman de Rou, turned Geoffrey's work into a French poem entitled the Brut d'Angleterre, "brut" being a Welsh word meaning chronicle. About the year 1200 Wace's poem was Englished by Layamon, a priest of Arley Regis, on the border stream of Severn. Layamon's Brut is in 30,000 lines, partly alliterative and partly rhymed, but written in pure Saxon English with hardly any French words. The style is rude but vigorous and, at times, highly imaginative. Wace had amplified Geoffrey's chronicle somewhat, but Layamon made much larger additions, derived, no doubt, from legends current on the Welsh border. In particular, the story of Arthur grew in his hands into something like fullness. He tells of the enchantments of Merlin the wizard, the unfaithfulness of Arthur's queen, Guinevere, and the treachery of his nephew, Modred. His narration of the last great battle between Arthur and Modred;
of the wounding of the king ("fifteen fiendly wounds he had; in the least one might thrust two gloves"); and of the little boat, with two women therein, which came to bear him away to Avalon and the queen Argante, "sheenest of all elves," whence he shall come again, according to Merlin's prophecy, to rule the Britons; all this left little, in essentials, for Tennyson to add in his "Death of Arthur." This new material for fiction was eagerly seized upon by the Norman romancers. The story of Arthur drew to itself other stories which were afloat. Walter Map, a gentleman of the court of Henry the Second, in two French prose romances, connected with it the church legend of the Sangreal, or holy cup, from which Christ had drunk at his last supper, and which Joseph of Arimathea had afterward brought to England. There it miraculously disappeared, and became thenceforth the occasion of knightly quest, the mystic symbol of the object of the soul's desire, an adventure only to be achieved by the maiden knight, Galahad, the son of the great Lancelot, who, in the romances, had taken the place of Modred in Geoffrey's history as the paramour of Queen Guinevere. In like manner the love story of Tristan and Isolde was joined by other romancers to the Arthur saga. This came probably from Brittany or Cornwall. Thus there grew up a great epic cycle of Arthurian romance, with a fixed shape and a unity and vitality which have prolonged it to our own day, and rendered it capable of a deeper and more spiritual treatment and a more artistic handling by such modern English poets as Tennyson in his Idylls of the King, by Matthew Arnold, Swinburne, and many others. There were innumerable Arthur romances, in prose and verse, in the Anglo-Norman and continental French dialects, in English, in German, and in other tongues. But the final form which the saga took in medieval England was the prose Morte d'Arthur of Sir Thomas Malory, composed at the close of the 15th century. This was a digest of the earlier romances, and is Tennyson's
main authority. Beside the literature of the knight was the literature of the cloister. There is a considerable body of religious writing in early English, consisting of homilies in prose and verse; books of devotion, like the Ancren Riwle (Rule of Anchoresses), 1225, and the Ayenbite of Inwyt (Remorse of Conscience), 1340, both in prose; the Handlyng Sinne, 1303, the Cursor Mundi, 1320, and the Prick of Conscience, 1340, in verse; metrical renderings of the Psalter, the Paternoster, the Creed, and the Ten Commandments; the Gospels for the day, such as the Ormulum, or Book of Orm, 1205; legends and miracles of saints; poems in praise of virginity, on the contempt of the world, on the five joys of the Virgin, the five wounds of Christ, the eleven pains of hell, the seven deadly sins, the fifteen tokens of the coming judgment; and dialogues between the soul and the body. These were the work not only of the monks, but also of the begging friars and, in smaller part, of the secular or parish clergy. They are full of the ascetic piety and superstition of the Middle Age: the childish belief in the marvelous, the allegorical interpretation of Scripture texts, the grotesque material horrors of hell with its grisly fiends, the vileness of the human body, and the loathsome details of its corruption after death. Now and then a single poem rises above the tedious and hideous barbarism of the general level of this monkish literature, either from a more intensely personal feeling in the poet or from an occasional grace or beauty in his verse. A poem so distinguished, for example, is A Luve Ron (a love counsel) by the Minorite friar Thomas de Hales, one stanza of which recalls the French poet Villon's Ballad of Dead Ladies, with its refrain "Mais où sont les neiges d'antan?" "But where are the snows of yester year?" "Where is Paris and Heleyne, that weren so bright and fair of blee; Amadas, Tristan, and Ideyne, Yseude, and alle theo; Ector, with his scharpe meyne, and Cesar, riche of worldes fee? Hi beth i-gliden out of the reyne, so the schef is of the cleye." A
few early English poems on secular subjects are also worthy of mention, among others The Owl and the Nightingale, generally assigned to the reign of Henry III (1216-1272), an estrif, or dispute, in which the owl represents the ascetic and the nightingale the aesthetic view of life. The debate is conducted with much animation and a spirited use of proverbial wisdom. The Land of Cockaygne is an amusing little poem of some two hundred lines, belonging to the class of fabliaux, short humorous tales or satirical pieces in verse. It describes a lubber-land, or fools' paradise, where the geese fly down all roasted on the spit, bringing garlic in the bills for the dressing, and where there is a nunnery upon a river of sweet milk, and an abbey of white monks and gray, whose walls, like the hall of little King Pepin, are of pasties and pie-crust, with flouren cakes for the shingles and fat puddings for the pins. There are a few songs dating from about 1300, mostly found in a single collection, Harleian manuscript 2253, which are almost the only English verse before Chaucer that has any sweetness to a modern ear. They are written in French strophic forms in the southern dialect, and sometimes have an intermixture of French and Latin lines. They are musical, fresh, simple, and many of them are very pretty. They celebrate the gladness of spring with its cuckoos and throstles, its daisies and woodruff: "When the nightingale sings the woodes waxen green, leaf and grass and blossom springs in Averil, I ween, and love is to my herte gone with a spear so keen, night and day my blood it drinks, my herte doth me tene." Others are love plaints to "Alysoun" or some other lady whose "name is in a note of the nightingale," whose eyes are as gray as glass, and her skin as "red as rose on rys." Some employ a burden or refrain: "Blow, northern wind, blow thou me my sweeting, blow, northern wind, blow, blow, blow." Others are touched with a light melancholy at the coming of winter: "Winter wakeneth all my care, now these leaves waxeth bare; oft I sike and mourne sare, when it cometh in my thought of this world's joy, how it goeth all to naught." Some of these poems are love songs to Christ or the Virgin, composed in the warm language of earthly passion. The sentiment of chivalry, united with the ecstatic reveries of the cloister, had produced Mariolatry, and the imagery of the Song of Solomon, in which Christ wooes the soul, had made this feeling of divine love familiar. Toward the end of the 13th century a collection of lives of saints, a sort of English Golden Legend, was prepared at the great abbey of Gloucester for use on saints' days. The legends were chosen partly from the hagiology of the Church Catholic, as the lives of Margaret, Christopher, and Michael; partly from the calendar of the English Church, as the lives of St. Thomas of Canterbury, of the Anglo-Saxons Dunstan and Swithin, who is mentioned by Shakespeare, and Kenelm, whose life is quoted by Chaucer in the Nun's Priest's Tale. The verse was clumsy and the style monotonous, but an imaginative touch here and there has furnished a hint to later poets; thus the legend of St. Brandan's search for the earthly paradise has been treated by Matthew Arnold and William Morris. About the middle of the 14th century there was a revival of the old English alliterative verse, in romances like William and the Werwolf and Sir Gawain, and in religious pieces such as Cleanness (purity), Patience, and The Pearl, the last-named a mystical poem of much beauty, in which a bereaved father sees a vision of his daughter among the glorified. Some of these employed rhyme as well as alliteration. They are in the West Midland dialect, although Chaucer implies that alliteration was most common in the north. "I am a southern man," says the parson in the Canterbury Tales; "I cannot geste rum, ram, ruf, by my letter." But the most important of the alliterative poems was The Vision of William concerning Piers the Plowman. In the second half of the 14th century French had ceased to be the mother tongue of any considerable
part of the population of England. By a statute of Edward III in 1362 it was displaced from the law courts; by 1386 English had taken its place in the schools. The Anglo-Norman dialect had grown corrupt, and Chaucer contrasts the French of Paris with the provincial French spoken by his Prioress, "after the scole of Stratford-atte-Bowe." The native English genius was also beginning to assert itself, roused in part, perhaps, by the English victories in the wars of Edward III against the French. It was the bows of the English yeomanry that won the fight at Crécy, fully as much as the prowess of the Norman baronage. But at home the times were bad. Heavy taxes and the repeated visitations of the pestilence, or Black Death, pressed upon the poor and wasted the land. The Church was corrupt, the mendicant orders had grown enormously wealthy, and the country was eaten up by a swarm of begging friars, pardoners, and apparitors. The social discontent was fermenting among the lower classes which finally issued in the communistic uprising of the peasantry under Wat Tyler and Jack Straw. This state of things is reflected in the Vision of Piers Plowman, written as early as 1362 by William Langland, a tonsured clerk of the west country. It is in the form of an allegory, and bears some resemblance to the later and more famous allegory of the Pilgrim's Progress. The poet falls asleep on the Malvern Hills in Worcestershire and has a vision of a "fair field full of folk," representing the world with its various conditions of men. There were pilgrims and palmers; hermits with hooked staves, who went to Walsingham, and their wenches after them; great lubbers and long, that loth were to work; friars glossing the gospel for their own profit; pardoners cheating the people with relics and indulgences; parish priests who forsook their parishes, that had been poor since the pestilence time, and went to London to sing there for simony; bishops, archbishops, and deacons, who got themselves fat clerkships in the Exchequer or King's
Bench; in short, all manner of lazy and corrupt ecclesiastics. A lady who represents Holy Church then appears to the dreamer, explaining to him the meaning of his vision, and reads to him a sermon, the text of which is "When all treasure is tried, truth is the best." A number of other allegorical figures are next introduced, Conscience, Reason, Meed, Simony, Falsehood, etc., and after a series of speeches and adventures a second vision begins, in which the seven deadly sins pass before the poet in a succession of graphic impersonations, and finally all the characters set out on a pilgrimage in search of Saint Truth, finding no guide to direct them save Piers the Plowman, who stands for the simple, pious laboring man, the sound heart of the English common folk. The poem was originally in eight divisions or passus, to which was added a continuation in three parts, Vita de Dowel, Dobet, and Dobest. About 1377 the whole was greatly enlarged by the author. Piers Plowman was the first extended literary work after the Conquest which was purely English in character. It owed nothing to France but the allegorical cast which the Roman de la Rose had made fashionable in both countries. But even here such personified abstractions as Langland's Fair-speech and Work-when-time-is remind us less of the Fals-Semblant and Bel-Acueil of the French courtly allegories than of Bunyan's Mr. Worldly Wiseman, and even of such Puritan names as Praise-God Barebones and Zeal-of-the-land Busy. The poem is full of English moral seriousness, of shrewd humor, the hatred of a lie, the homely English love for reality. It has little unity of plan, but is rather a series of episodes, discourses, parables, and scenes. It is all astir with the actual life of the time. We see the gossips gathered in the ale-house of Beton the brewster, and the pastry cooks in the London streets crying "Hot pies, hot! Good geese and gris! Go we dine, go we!" Had Langland not linked his literary fortunes with an uncouth and obsolescent verse, and had he possessed a finer artistic sense and a higher poetic imagination, his book might have been, like Chaucer's, among the lasting glories of our tongue. As it is, it is forgotten by all but professional students of literature and history. Its popularity in its own day is shown by the number of manuscripts which are extant, and by imitations such as Piers the Plowman's Creed (1394) and The Plowman's Tale, for a long time wrongly inserted in the Canterbury Tales. Piers became a kind of typical figure, like the French peasant Jacques Bonhomme, and was appealed to as such by the Protestant reformers of the 16th century. The attack upon the growing corruptions of the Church was made more systematically, and from the standpoint of a theologian rather than of a popular moralist and satirist, by John Wycliffe, the rector of Lutterworth and professor of divinity in Balliol College, Oxford. In a series of Latin and English tracts he made war against indulgences, pilgrimages, images, oblations, the friars, the Pope, and the doctrine of transubstantiation. But his greatest service to England was his translation of the Bible, the first complete version in the mother tongue. This he made about 1380, with the help of Nicholas Hereford, and a revision of it was made by another disciple, Purvey, some ten years later. There was no knowledge of Hebrew or Greek in England at that time, and the Wycliffite
versions were made not from the original tongues but from the Latin Vulgate. In his anxiety to make his rendering close, and mindful, perhaps, of the warning in the Apocalypse, "If any man shall take away from the words of the book of this prophecy, God shall take away his part out of the book of life," Wycliffe followed the Latin order of construction so literally as to make rather awkward English, translating, for example, "Quid sibi vult hoc somnium?" by "What to itself wole this sweven?" Purvey's version was somewhat freer and more idiomatic. In the reigns of Henry IV and V it was forbidden to read or to have any of Wycliffe's writings. Such of them as could be seized were publicly burned. In spite of this, copies of his Bible circulated secretly in great numbers. Forshall and Madden, in their great edition (1850), enumerate 150 manuscripts which had been consulted by them. Later translators, like Tyndale and the makers of the Authorized Version, or King James Bible (1611), followed Wycliffe's language in many instances, so that he was in truth the first author of our biblical dialect, and the founder of that great monument of noble English which has been the main conservative influence in the mother tongue, holding it fast to many strong, pithy words and idioms that would else have been lost. In 1415, some thirty years after Wycliffe's death, by decree of the Council of Constance his bones were dug up from the soil of Lutterworth chancel and burned, and the ashes cast into the Swift. "The brook," says Thomas Fuller in his Church History, "did convey his ashes into Avon, Avon into Severn, Severn into the narrow seas, they into the main ocean; and thus the ashes of Wycliffe are the emblem of his doctrine, which now is dispersed all the world over." Although the writings thus far mentioned are of very high interest to the student of the English language and the historian of English manners and culture, they cannot be said to have much importance as mere literature. But in Geoffrey Chaucer (died 1400) we meet
with a poet of the first rank, whose works are increasingly read and will always continue to be a source of delight and refreshment to the general reader, as well as a "well of English undefiled" to the professional man of letters. With the exception of Dante, Chaucer was the greatest of the poets of medieval Europe, and he remains one of the greatest of English poets, and certainly the foremost of English storytellers in verse. He was the son of a London vintner, and was in his youth in the service of Lionel, Duke of Clarence, one of the sons of Edward III. He made a campaign in France in 1359-60, when he was taken prisoner. Afterward he was attached to the court and received numerous favors and appointments. He was sent on several diplomatic missions by the king, three of them to Italy, where, in all probability, he made the acquaintance of the new Italian literature, the writings of Dante, Petrarch, and Boccaccio. He was appointed at different times Comptroller of the Wool Customs, Comptroller of Petty Customs, and Clerk of the Works. He sat for Kent in Parliament, and he received pensions from three successive kings. He was a man of business as well as books, and he loved men and nature no less than study. He knew his world; he "saw life steadily and saw it whole." Living at the center of English social and political life, and resorting to the court of Edward III, then the most brilliant in Europe, Chaucer was an eyewitness of those feudal pomps which fill the high-colored pages of his contemporary, the French chronicler Froissart. His description of a tournament in the Knight's Tale is unexcelled for spirit and detail. He was familiar with dances, feasts, and state ceremonies, and all the life of the baronial castle, in bower and hall, the trumpets with the loud minstrelsy, the heralds, the ladies, and the squires: "What hawkes sitten on the perche above, what houndes liggen on the floor adown." But his sympathy reached no less the life of the lowly, the poor widow in her narrow cottage, and the "trewe swinker and a good," the plowman whom Langland had made the hero of his vision. He is, more than all English poets, the poet of the lusty spring, of April with her showers sweet and of the fowles' song of May, with all her flowers; of the green of the new leaves in the wood, and the meadows new powdered with the daisy, the mystic Marguerite of his Legend of Good Women. A fresh vernal air blows through all his pages. In Chaucer's earlier work, such as the translation of the Romaunt of the Rose (if that be his), the Book of the Duchess, the Parliament of Fowls, the House of Fame, as well as in the Legend of Good Women, which was later, the inspiration of the French court poetry of the 13th and 14th centuries is manifest. He retains in them the medieval machinery of allegories and dreams, the elaborate descriptions of palaces, temples, portraitures, etc., which had been made fashionable in France by such poems as Guillaume de Lorris's Roman de la Rose and Machault's La Fontaine Amoureuse. In some of these the influence of Italian poetry is also perceptible. There are suggestions from Dante, for example, in the Parliament of Fowls and the House of Fame, and Troilus and Cressida is a free handling, rather than a translation, of Boccaccio's Filostrato. In all of these there are passages of great beauty and force. Had Chaucer written nothing else, he would still have been remembered as the most accomplished English poet of his time, but he would not have risen to the rank which he now occupies, as one of the greatest English poets of all time. This position he owes to his masterpiece, the Canterbury Tales. Here he abandoned the imitation of foreign models and the artificial literary fashions of his age, and wrote of real life from his own ripe knowledge of men and things. The Canterbury Tales are a collection of stories written at different times, but put together probably toward the close of his life. The framework into which they are fitted is one of the happiest ever devised. A number of pilgrims who are going
on horseback to the shrine of St. Thomas à Becket at Canterbury meet at the Tabard Inn in Southwark, a suburb of London. The jolly host of the Tabard, Harry Bailey, proposes that on their way to Canterbury each of the company shall tell two tales, and two more on their way back, and that the one who tells the best shall have a supper at the cost of the rest when they return to the inn. He himself accompanies them as judge and reporter. In the setting of the stories there is thus a constant feeling of movement and the air of all outdoors. The little head-links and end-links which bind them together give incidents of the journey and glimpses of the talk of the pilgrims, sometimes amounting, as in the prologue of the Wife of Bath, to full and almost dramatic character sketches. The stories, too, are dramatically suited to the narrators. The general prologue is a series of such character sketches, the most perfect in English poetry. The portraits of the pilgrims are illuminated with the soft brilliancy and the minute loving fidelity of the miniatures in the old missals, and with the same quaint precision in traits of expression and in costume. The pilgrims are not all such as one would meet nowadays at an English inn. The presence of a knight, a squire, a yeoman archer, and especially of so many kinds of ecclesiastics, a nun, a friar, a monk, a pardoner, and a sompnour, or apparitor, reminds us that the England of that day must have been less like Protestant England, as we know it, than like the Italy of some thirty years ago. But however the outward face of society may have changed, the Canterbury pilgrims remain, in Chaucer's description, living and universal types of human nature. The Canterbury Tales are twenty-four in number. There were thirty-two pilgrims, so that, if finished as designed, the whole collection would have numbered one hundred and twenty-eight stories. Chaucer is the bright consummate flower of the English Middle Age. Like many another great poet, he put the final touch to the various literary forms that he found in cultivation.
Thus his Knight's Tale, based on Boccaccio's Teseide, is the best of English medieval romances; and yet his Rime of Sir Thopas, who goes seeking an elf queen for his mate and is encountered by the giant Sir Olifaunt, burlesques these same romances with their impossible adventures and their tedious rambling descriptions. The tales of the Prioress and the Second Nun are saints' legends. The Monk's Tale is a set of dry moral apologues in the manner of his contemporary, "the moral Gower." The stories told by the Reeve, Miller, Friar, Sompnour, Shipman, and Merchant belong to the class of fabliaux, a few of which existed in English, such as Dame Siriz, The Lay of the Ash, and The Land of Cockaygne, already mentioned. The Nun's Priest's Tale, likewise, which Dryden modernized with admirable humor, was of the class of fabliaux, and was suggested by a little poem in forty lines, Du Coc et du Werpil, by Marie de France, a Norman poetess of the 13th century. It belonged, like the early English poem of The Fox and the Wolf, to the popular animal saga of Reynard the Fox. The Franklin's Tale, whose scene is Brittany, and the Wife of Bath's Tale, which is laid in the time of the British Arthur, belong to the class of French lais, serious metrical tales shorter than the romance and of Breton origin, the best representatives of which are the elegant and graceful lais of Marie de France. Chaucer was our first great master of laughter and of tears. His serious poetry is full of the tenderest pathos; his loosest tales are delightfully humorous and lifelike. He is the kindliest of satirists. The knavery, greed, and hypocrisy of the begging friars and the sellers of indulgences are exposed by him as pitilessly as by Langland and Wycliffe, though his mood is not, like theirs, one of stern moral indignation, but rather the good-natured scorn of a man of the world. His charity is broad enough to cover even the corrupt sompnour, of whom he says, "And yet in sooth he was a good fellow." Whether he shared Wycliffe's opinions is unknown, but
John of Gaunt, the Duke of Lancaster and father of Henry IV, who was Chaucer's lifelong patron, was likewise Wycliffe's great upholder against the persecution of the bishops. It is perhaps not without significance that the Poor Parson in the Canterbury Tales, the only one of his ecclesiastical pilgrims whom Chaucer treats with respect, is suspected by the host of the Tabard to be a "Loller," that is, a Lollard, or disciple of Wycliffe, and that because he objects to the jovial innkeeper's swearing "by Goddes bones." Chaucer's English is nearly as easy for a modern reader as Shakespeare's, and few of his words have become obsolete. His verse, when rightly read, is correct and melodious. The early English was, in some respects, more sweet upon the tongue than the modern language. The vowels had their broad Italian sounds, and the speech was full of soft gutturals and vocalic syllables, like the endings in -en, -es, and -e, which made feminine rhymes and kept the consonants from coming harshly together. Great poet as Chaucer was, he was not quite free from the literary weakness of his time. He relapses sometimes into the babbling style of the old chroniclers and legend writers, cites "auctours," and gives long catalogues of names and objects with a naive display of learning, and introduces vulgar details in his most exquisite passages. There is something childish about almost all the thought and art of the Middle Ages, at least outside of Italy, where classical models and traditions never quite lost their hold. But Chaucer's artlessness is half the secret of his wonderful ease in storytelling, and is so engaging that, like a child's sweet unconsciousness, one would not wish it otherwise. The Canterbury Tales had shown of what high uses the English language was capable, but the curiously trilingual condition of literature still continued. French was spoken in the proceedings of Parliament as late as the reign of Henry VI (1422-1471). Chaucer's contemporary, John Gower, wrote his Vox Clamantis in Latin, his Speculum Meditantis, a lost poem, and a number of ballades in Parisian French, and his Confessio Amantis (1393) in English. The last-named is a dreary, pedantic work, in some fifteen thousand smooth, monotonous eight-syllable couplets, in which the confessor instructs the lover how to win the love of his lady. End of part one, chapter one. |
english_literature_lectures | The_Charles_Dickens_phenomenon_University_of_Reading_public_lecture_series_201213.txt | [Music] The reason Dickens continues to appeal in the 21st century is that he deals with timeless qualities, qualities to do with compassion and the imagination. So if we look at Tiny Tim in A Christmas Carol, the way in which he tugs at our heartstrings is still relevant today. In Oliver Twist, one of Dickens's most famous novels, Dickens portrays, rather gruesomely, the death of Nancy the prostitute. What's interesting about that scene is that it's weirdly poetic and weirdly beautiful, with light and color and lots of images of dancing sunlight: "Through costly-coloured glass and paper-mended window, through cathedral dome and rotten crevice, it shed its equal ray. It lighted up the room where a murdered woman lay." 150 years ago Dickens taught us to appreciate imagination and fancy in a world which was at that time obsessed with detail and fact. What I'd like to do is see Dickens's values being brought into the 21st century, and for people to appreciate that we need arts and we need poetry and we need the imagination on top of the business and the science cultures that we [Music] have |
english_literature_lectures | The_Mind_and_Times_of_Virginia_Woolf_Part_1_of_3.txt | [Music] It's clear from the evidence that Virginia Woolf could be described as manic-depressive, and who knows, if she had had lithium she might have lived longer; we don't know that. She did alternate between periods of mania and high excitement and periods of very inert depression. She suffered terribly from sleeplessness; she had appalling headaches. I mean, these are not just headaches as you and I know them, but really terrible, incapacitating headaches. She clearly suffered tremendously from a lot of physical pain all through her life, and I think her life is a story of great courage and [Music] stoicism. "Two days ago, Sunday the 16th of April 1939 to be precise, Nessa said that if I did not start writing my memoirs I should soon be too old. There are several difficulties. In the first place, the enormous number of things I can remember: many bright colors, many distinct sounds, some human beings, caricatures, comic; several violent moments of being, always including a circle of the scene which they cut out; and all surrounded by a vast space. That is a rough visual description of childhood. This is how I shape it, and how I see myself as a child." Virginia Woolf was born in London. Her parents were Leslie and Julia Stephen. Her mother was descended from an Anglo-Indian family; the women in the family were famous for their beauty, and something of that Virginia Woolf inherited. Her father, Leslie Stephen, who became Sir Leslie Stephen, was an eminent author and editor; he edited 26 volumes of the Dictionary of National Biography. He was really at the very center of the English literary establishment. Her father and her mother were both on second marriages, they were both widowed, and they were much older than the group of children that started with Vanessa; then there was Thoby, Virginia, and Adrian. So she grew up with parents who, she said, were really the age of grandparents. And her mother Julia had had two sons by
her previous marriage, George and Gerald Duckworth. One of these two brothers has become notorious, because later on in life Virginia Woolf wrote a memoir in which she suggested that George Duckworth had sexually molested her as a child. There was obviously some very traumatic sexual interference going on, and there is a school of thought that argues that her life is dominated by childhood sexual abuse. I am not of that opinion, because I don't read her life as that of a victim. She grew up in a very Victorian household, despite the fact she was born in 1882, very, very, very near the end of the century, and she basically, until the death of her father, lived under quite Victorian circumstances, and disliked them intensely. "By nature both Vanessa and I were explorers, revolutionists, reformers, but our surroundings were at least fifty years behind the times. Father himself was a typical Victorian." Virginia Woolf's whole political argument, which had to do with the unfair treatment of women in British society in the early 20th century, was based on the fact that she didn't go to school and she didn't go to university. She was burningly resentful of the fact that she was self-taught and that she didn't have an education like her brothers. "Was I clever, stupid, good-looking, ugly, passionate, cold? Owing partly to the fact that I was never at school, never competed in any way with children of my own age, I have never been able to compare my gifts and defects with other people's." She was very close particularly to her sister, because both her brothers were sent away to school, but she remained at home. Her sister Vanessa very early on decided she wanted to be a painter, and Virginia, perhaps wanting also to have a role, decided that she was to be the writer. I think she probably started writing about the age of three, and was writing nonstop and unstoppably all through her life, from the minute she could hold a pencil until the day she walked into the river. She was also a superb artist, as it turned out,
and one of the real phenomena of that family was how these two daughters both came out as highly significant artists. Her mother died when Virginia was thirteen. This was an absolute catastrophe in her life. "We had been sent up to the day nursery after she died, and were crying. How that early morning picture has stayed with me." The first serious bout of mental illness which Virginia Woolf underwent happened soon after her mother's death, at the age of around thirteen. "There was the moment of the puddle in the path, when for no reason I could discover everything suddenly became unreal. I was suspended. I could not step across the puddle. I tried to touch something. The whole world became unreal." Then, immediately, her half-sister Stella, to whom she was very close, died, and then her father died. I mean, it is a sort of staggering succession of blows. Virginia Woolf had serious and very debilitating attacks of mental illness throughout her life. They came at times of great stress. She was visited by voices; she was incapable of getting up, working, or looking after herself. And those voices, for her, were masculine voices; they told her she was worthless, they told her she was terrible. She spent her whole life actually coming to terms with the deaths of her parents, trying to prove herself to them. There's a scene in a novel by Virginia Woolf called Mrs Dalloway where the grown-up Mrs Dalloway imagines herself carrying her life in her arms, as if it's a baby, and walking towards her parents, who are both dead in the novel, and putting this thing down in front of them and saying, "This is my life, this is what I've made of it." And I always feel that's autobiographical, and that that's what Virginia was always doing when she was writing: she was proving herself to her dead parents. And it was in 1904 that the Stephen family, Vanessa, Virginia, Thoby, Adrian, moved from a Victorian house in Hyde Park Gate to Bloomsbury, then an area that was not considered to be a good place to live. They set up home there,
invited their friends, and the place became a meeting point for artists, writers, intellectuals. "We were full of experiments and reforms. We were going to do without table napkins; we were going to paint, to write, to have coffee after dinner instead of tea at nine o'clock. Everything was going to be new, everything was going to be different. Everything was on trial." Virginia's elder brother was called Thoby Stephen, and he's a crucial character in the story of both Virginia Woolf's life and of Bloomsbury, because when he left Cambridge he began holding at-homes at their house in 46 Gordon Square in Bloomsbury, and he invited his Cambridge friends to those events. The Bloomsbury Group was never a club; it was just a collection of friends. It consisted of Thoby Stephen and his serious young philosophical and literary friends from Cambridge, who were mostly gay or bisexual: Lytton Strachey, Duncan Grant, and so on. And they all sat around discussing the nature of good. They were very aware that the Victorians had placed a great deal of attention on public life, and these friends wanted to turn that kind of investigation on personal lives, private lives, on the understanding that only if there was intellectual honesty close at hand could you hope to achieve it in the public sphere. And in the pursuit of truth, conventions, where they were mere conventions, were there to be ignored, to be torn up, to be challenged. If someone departs from a convention today, people hardly raise an eyebrow, but you could be damned in Vanessa and Virginia's day simply by, you know, a lack of an inch on the length of your skirt. So in that setting they were very bold. The Bloomsbury Group was quite wonderfully omnisexual; everybody had relations with everybody else. A lot of people hated them, regarded them as very exclusionary, as elite, also as lascivious and immoral, which is kind of fun to think about now. The Stephen family went on holiday to Greece in the summer of 1906, and while they were abroad both Thoby and Vanessa fell ill. Thoby
came back to London a little before the rest of the party. He was thought to be getting better, but suddenly he died; he'd contracted typhoid. And his death had an extraordinary effect on these siblings, because it drew them all suddenly that much closer. Virginia and Vanessa and their brother Adrian were completely desolated by this death. Vanessa's reaction was to get married to one of Thoby's closest friends; it was almost like a replacement. Virginia lost her brother, and she also, as it were, lost her sister pretty much at the same time, and she was distraught, absolutely distraught. |
english_literature_lectures | Mark_Steel_on_Sylvia_Pankhurst.txt | [Music] It seems remarkable now that anyone would be so excited about getting the vote that they would dedicate their whole lives to securing it, because most modern politicians, I think you'll agree, like yourself, seem to be the embodiment of passionless, soulless dullness. Do you agree with that? So can we start? What are we talking about? Most young people have so little interest in elections that they don't even know how they work. I know this from when I stood in an election, and I was giving out these leaflets, and these two students came up, and one went, "Yeah, safe man, yeah, I've got to vote for you man," and his mate went, "Well, you can't vote man, you're only 17," and he said, "Yeah I can, get round it, I've got connections man." So maybe things were different back in the days of the suffragettes, or maybe the suffragettes were about much more than the vote. Sylvia Pankhurst became a hero to thousands of the poorest people in the East End of London. She was attacked by Lenin for being too left-wing, and she ended up living in Ethiopia, revered as a princess under the king of the Rastafarians, Haile Selassie. Sylvia Pankhurst was born in Manchester in 1882, at a time when, to most people in authority, the idea of women voting was heresy. For example, the MP for Hereford, C.W. Radcliffe Cooke, said, "I will oppose the right of women to vote until women are bigger than men," which is fantastic logic. So what about big women then, can they have the vote? And what about things that are bigger than men, did they get the vote? The Tory MP for Colchester, E.K. Karslake, said, "The wife should be absolutely and entirely under the control of her husband. She should not gad about, and if she does, her husband is entitled to lock her up." Manchester was the most radical city in England at the time, and two of the most prominent characters in these circles in the 1870s were Emmeline and Dr. Richard Pankhurst, who supported causes such as the abolition of the workhouse and votes for women.
the Pankhursts had five children, including Harry, who died young; then there was Christabel and Sylvia. Now despite her liberal parents, Sylvia remembered, under the discipline of the servants, being tied to the bed all day for refusing to take cod liver oil. Only the Victorians could decide that the punishment for not taking something to ease your joints is to strap your joints to furniture. During Sylvia's childhood the radical movement was transformed by a mass agitation for better working conditions by some of the poorest people in the country, including the women match makers in this building in East London, who went on strike. The women were especially annoyed because they'd had their wages docked, partly to pay for a statue to ex-Prime Minister Gladstone. The strike had a huge impact in raising the status of working-class women in the community. Now the factory has been turned into loft-style apartments, but the developers have made a special effort to preserve the history by making sure that each flat is roughly the size of a matchbox. The strikes changed the outlook of the Pankhurst family, and they became involved in the newly formed Independent Labour Party. And it's important to remember that at that time people joined that party in order to make it a radical campaigning organisation, whereas if anyone tried to do that with the modern Labour Party they might as well join the RAC and try to turn it into a radical campaigning breakdown service. There was another effect of the Independent Labour Party on Sylvia Pankhurst: the leader of the new organisation was Keir Hardie, who was a passionate supporter of votes for women. Hardie had been brought up in Lanarkshire, where he had to sleep on a dirt floor and started working in the pit at the age of 10, until one morning when he turned up late because he was looking after his dying brother, and he was sacked. Bastards. Keir Hardie became the first Independent Labour Party MP, for the area of West Ham, and one day when Sylvia came home from school she found Keir Hardie in the
living room talking to her parents, and later on she wrote about this meeting: "His eyes were two deep wells of kindness, like mountain pools with the sunlight distilled. I felt I could have rushed into his arms." For several years she spent any time she could with Keir Hardie; she'd help him write his speeches, and in turn he'd read her the works of Shelley, Byron and William Morris. All this would have been scandalous for any unmarried couple at the time, but Sylvia was 21 and Hardie was nearly 50 and married. With Sylvia he could let down for a moment the granite image of the working-class fighter and indulge his artistic side, while she was attracted to the radicalism of a man untainted by the peculiarities of a middle-class upbringing. Writing about one of their days out together she said: "He would pick up little stones and play with them as children do. 'You know, I never played games as a child,' I said. 'Ah,' he said, with infinite compassion and tenderness, 'that is the matter with you, you have heard too much serious [Music] talk.'" Sorry. Then in 1900 the campaign for votes for women came together with the working-class movement in the Lancashire cotton mills, when the weavers launched their petition for the vote, and then a group of people met in this room in the Pankhurst house to decide what to do next, and that's when the Women's Social and Political Union was formed. They decided to start off with some high-profile stunts. Good morning. For example, she took a petition and went banging on the Prime Minister's door, at which point she was arrested, and stunts like this attracted national coverage, until the Daily Mail called the women "suffragettes," so they adopted that as their official title. And Emmeline in particular became known as an impressive speaker, especially in the poor areas. In 1907 there were 400 meetings at which there were over a thousand people, and the marches also got bigger, until after one march in Hyde Park The Times reported: "It is no exaggeration to say that the number of people present was the largest
ever gathered together on one spot at one time in the history of the world." The Pankhursts' next tactic was to rush the House of Commons, so Emmeline and Christabel were jailed. The protest did eventually take place, but an inspector informed the women that the Prime Minister wouldn't see them, so Emmeline punched him in the jaw and they got arrested for assault. They should have had somebody doing an advert going, "I can throw stones and break windows, but could I punch a copper square on the jaw and get arrested? I don't know if I could do that. If you could, join the suffragettes." In the fighting that followed these arrests 108 more women were arrested, so that night a group of suffragettes came to the Home Office with stones wrapped in brown paper and smashed all the windows. From that moment onwards Emmeline and Christabel were committed to a strategy of smashing things, and the theoretical basis behind this plan was summed up by the elderly suffragette who said, "I just want to go out into the street and smash, smash, smash everything." Sylvia would take new recruits into country lanes, where their first task would be to collect flints the right size for smashing, and supporters were taken on window-smashing classes. Emmeline announced, "The argument of the broken pane of glass is the most valuable argument in modern politics." Whenever a suffragette was sent to prison she would go on hunger strike, so the prison authorities responded by getting doctors to force-feed them. When Sylvia was arrested for smashing windows: "Six of them flung me back on my bed. A man's hands were trying to force open my mouth, and a steel instrument pressed around my gums, feeling for gaps in my teeth, when something gradually forced my jaws apart as they tried to get the tube down my throat." [Music] Sylvia started to take issue with the movement. It wasn't that she lacked courage, as she'd been arrested 15 times and been on hunger strike more than any other suffragette, but she objected because, as she saw it, there was less emphasis now
on involving lots of women and more on individual heroism, and that way, she was aware, your supporters are left with not very much to do while a few heroes are treated like saints. From here the suffragettes went in opposite directions: Sylvia spoke at strikers' meetings, whereas the other suffragette leaders called for strikers to be jailed. Sylvia caused uproar on a tour of America by visiting the poorest immigrant communities and by agreeing to speak at a black university in Tennessee. And to say why this had such an impact, this is an example of a standard textbook used in English schools at that time: "The prosperity of the West Indies has declined since slavery was abolished. A large population is lazy, vicious and incapable of any serious improvement. A few bananas will sustain the life of a negro; he is quite happy and quite useless." So she went off to form her own section of the suffragettes in East London, and formed her own newspaper, the Women's Dreadnought. She was arrested for organising a mass booing of the Prime Minister outside Downing Street, but she slipped away and then turned up in disguise to speak at the meeting, so the police surrounded the meeting, arrested her, and took her to prison, where she went on hunger strike. She was released, but on the condition that she didn't make any more speeches, so she dressed in disguise again and made another speech. This time when the police arrived her friends turned a hose on them and she escaped. Eventually she was arrested again, was taken to prison again, went on hunger strike again, and this time when she was released there were even more conditions, so she fled to Norway. Then the government introduced a new law that said if a woman in prison went on hunger strike she would be released and then rearrested again as soon as she'd had something to eat, and to really take the piss they called this the Cat and Mouse Act. As Sylvia was becoming renowned for disappearing, whenever she was a mouse she had to come up with ever more elaborate
disguises to get around East London, and at one meeting in Bow she said: "I reached the hall in disguise, and what a triumph it was to be back among my people. After 10 minutes the crowd cried, 'Jump, Sylvia, jump,' with arms outstretched, so jump I did." And she wrote an article declaring: "We have not yet made ourselves a match for the police, and we have got to do it. The police know jiu-jitsu. I advise you to learn jiu-jitsu." The reason for this was the increasing violence at the meetings. For example, at one meeting, she said, "The table was flung to the ground and the chairs were smashed. Mrs Ives, the honorary secretary, was beaten with a truncheon." And she added: "Men and women came to the meetings with sticks in their hands, retaliating against the blows from the police. I also began to see a weapon called a Saturday night, made of tarred rope closely twisted and sometimes weighted with lead." [Applause] [Music] Even the anti-strike wing of the suffragettes extended its campaign of violence: on one night in 1914 they burnt down three Scottish castles, and famously they chained themselves to the railings of Buckingham Palace. Imagine the impact that must have had in those royalist, reverential times, whereas 80 years later there had probably been a woman looking out of that window going, "That will never get anywhere. I threw myself down the stairs when I was pregnant with the heir to the throne and nobody took any notice of that." And most famous of all was Emily Davison. She had been a teacher but gave up to work full-time for the suffragettes, and once, it is said, she broke into the House of Commons and spent the whole night hiding somewhere around here in a cupboard, but nobody knows which cupboard, and the story is probably just a myth. She was arrested for trying to set fire to the post office in Parliament Street, and then in prison she went on hunger strike and barricaded herself into her cell. And then one night in 1913 she laid a wreath on the statue of Joan of Arc, and the next day she went to the Derby. As the
king's horse Anmer ran into view just down there, Emily ran out of the crowd, over the fence, onto the course and under the king's horse, which apart from anything else must have taken the most meticulous planning, to know exactly where the right horse would be at exactly the right time. She must have spent ages studying the form, whereas today you could just go up to the jockey beforehand and go, "Wait here, here's a score, run us over will you, mate?" Emily was knocked unconscious and later she died in hospital, and the incident was noted by the king: "Just as the horses were coming round Tattenham Corner, a suffragette dashed out. Scandalous proceeding. A disappointing day. Got home 5.15 and had tea in the garden." So the government was under siege from the suffragettes, from the unions, and from the Home Rule movement in Ireland, and these issues all came together when there was a general strike in Dublin. The leader of the strikers, Jim Larkin, spoke with Sylvia Pankhurst at the Albert Hall, but then Christabel, who was living in Paris at the time, summoned Sylvia and told her that she and the whole of the East London group were all expelled from the Women's Social and Political Union for supporting Larkin and the strikers, and when Sylvia complained that this was undemocratic, Christabel replied, "We do not want democracy here." Then there was a row, because Sylvia continued to call her group the East London Federation of Suffragettes, and in a precursor to the sort of row that plagued groups like Bucks Fizz, the other Pankhursts declared that they were the only ones entitled to use the name. Soon all these issues were engulfed by one other: the coming war. The government announced that the Germans were raping nuns and bayonetting babies and were around every corner, and this fever reached every section of society; dog homes were full of dachshunds that had been abandoned because of their German name. Across Europe socialist organisations led huge demonstrations in opposition to the war, but on the day war broke out almost every one of
them changed their mind and supported it instead. Slaughter. One group that became more fervent than almost anyone was Emmeline's suffragettes. She announced that all suffragette action would cease, because "with that patriotism which has nerved women to endure endless torture, we ardently desire that our country shall be victorious... The war has made me feel how much there is of nobility in men." The suffragette newspaper denounced a minister at the Foreign Office because he had a German uncle, and Sylvia despaired as the rest of her family went around the country speaking at recruiting meetings for the army. According to her, they handed white feathers to every young man they encountered wearing civilian dress, and they always assured their audience that God was on their side. Of course he was; God's always on your side in a war. There has never, as far as I know, been a war in which a general has got up and said, "Last night, in this our time of need, I prayed to God. Unfortunately it seems he's backing the Turks on this one." One of a handful of individuals across Europe to announce their opposition to the war was Sylvia Pankhurst. She wrote that as she saw this clamour to war, "there was a cry within me: stop all this breaking of bones, this mangling of men, this making of widows." So Emmeline wrote her a letter: "I am ashamed of where you stand on the war. I only wish Harry was still alive so he could have gone and fought." I wonder if she said, "Oh, I wish I was like Mrs Wickham over the road. Eight strapping young boys she had, and she's lost the lot of them. Oh, I was so jealous." But apart from the carnage, the war also caused food shortages, forcing up prices, so Sylvia turned her office into a cheap cafe for the most desperate, on a site which is now a pub with possibly the finest pub sign in the whole of Britain. She even set up a toy factory to give people work; women whose husbands had gone away to war would come to work in this little building in a sort of anarchist profit-share collective. And from here Sylvia set up
marriages between local women and single soldiers, so that the women could carry on getting an allowance. But the difference between Sylvia and the old suffragettes was shown when she got one of her old suffragette comrades in to help: a lad who'd been on suffragette demonstrations came in destitute, looking for help, and the suffragette told him, "Well, why don't you enlist?" It was in the Women's Dreadnought that Siegfried Sassoon first made an anti-war statement, and at one point the paper was selling 40,000 copies a week. The Women's Dreadnought called for mutinies in the army, at which point Sylvia was jailed for six months for sedition. In the summer of 1915 Sylvia received a letter from Keir Hardie which began, worryingly, "Dear Sylvia," in which he told her that he was so ill he didn't expect to last a week. A few days later, while speaking on a demonstration against conscription, she noticed a newspaper headline: "Keir Hardie Dead." Following this despair she became ecstatic when news reached her that Lenin and the Bolsheviks had taken power in the Russian Revolution and captured the Tsar. "Ladies and gentlemen, we got him." At this point she changed her paper's name to the Workers' Dreadnought. The government sent arms to the forces fighting against the Russian Revolution, but dockers in the East End of London refused to load them. The river's joint shop stewards' movement organised this campaign, and Sylvia kept them continuously supplied with Lenin's appeal to the working masses, which was printed illegally. Communist Harry Pollitt said, "My landlady in Poplar expressed surprise that my mattress seemed to vary in size from day to day. She little knew that inside our mattress we kept our copies of Lenin's appeal." Sylvia was invited to attend a socialist conference in Stuttgart, but she didn't have a visa, so she had to slip out of the country in disguise and go to Italy. Then she travelled along goat paths to get into Switzerland, finally reaching Germany having crossed the Alps on foot. She travelled to Russia on a tiny
Norwegian fishing boat, as a stowaway, without a passport, across the Arctic Sea. When she got to Russia she was hugely impressed with the revolution, but she had an argument with Lenin, insisting the British Communist Party should have nothing to do with elections to a parliament. Lenin wrote a book arguing against Sylvia Pankhurst's stance, called Left-Wing Communism: An Infantile Disorder. That's cool, isn't it, to have Lenin going, "Trouble with you is you're too bloody left-wing"? It'd be like sitting in a pub with George Best and him going, "I'm going home, you're just being silly now, mate." Lenin insisted that the Communist parties of Europe should participate in elections, and wherever possible they should join the Labour Party. Sylvia derided the idea of joining the Labour Party, and of standing in elections at all, saying that the Communists should instead be encouraging power to pass to the local communities. So, having spent her whole life campaigning for the vote, now she was saying there was no point in anybody voting. So instead of joining the newly formed Communist Party, Sylvia and a few supporters went off to form their own party, and over the next few years the Workers' Dreadnought became increasingly hostile to the Russian Revolution, but it did run a series of lessons in Esperanto as a way of combating nationalism. Then the subtitle of the paper, "For International Communism," was dropped and replaced with new ones such as "For Clear Thoughts and Plain Language" and "The Happy Are Always Good." She might as well have had "Workers' Dreadnought: Because I'm Worth It." Ironically, as the Pankhursts were at war with themselves, the government was preparing to back down. Following the war it seemed inconceivable that soldiers who had fought the war should then be denied the vote, so a bill was proposed to extend the vote to all adult men. But then, as the men had gone off to fight, one and a half million women had taken their place in the factories, so it also seemed ridiculous to deny them the vote, as they had clearly taken
on the traditional men's roles. So the vote was granted to all women over the age of 30, and one of the first women to stand for election to Parliament was Christabel Pankhurst, for the Women's Party, campaigning for policies such as women wearing less lipstick. Christabel went even more peculiar: she became an Adventist and predicted that Europe was about to enter an age of dictators and earthquakes which would end with the Second Coming. Why do people feel the need to join these sorts of religions? Are they in church listening to stories about how God made woman out of a rib, and parted the sea, and turned Lot's wife into a pillar of salt, while they sit there thinking, "Trouble with this religion, it's not mad enough for me"? Emmeline became the parliamentary candidate for Whitechapel, backed by the Conservatives, while several of the most prominent suffragettes went on to work for the British Union of Fascists, which must have been quite handy; they could have given the SS tips on breaking windows. In contrast, Sylvia fell in love with an exiled Italian anarchist who worked on the Workers' Dreadnought, called Silvio Corio. This has to be one of the biggest family rifts of all time. Sylvia and Silvio moved to a house on this site that they called Red Cottage, in the suburban area of Woodford in Essex. What a fantastic thing to do, for no other reason than to annoy everybody on the neighbourhood watch scheme. I love the idea of suburban anarchists. But relations between the Pankhursts reached a new low when, at the age of 45, Sylvia became an unmarried mother. She clearly took delight in the annoyance this caused the conservative wing of the family, especially as she sold the story to the News of the World: "From the obscurity in which she has lived since the memorable days of the militant suffragettes, Miss Sylvia Pankhurst springs a new sensation." Perhaps it was a coincidence, but a few weeks later Emmeline collapsed and died, and even that didn't stop the antagonisms: at the memorial service one of the speakers was
Stanley Baldwin, leader of the Conservatives, and Sylvia was excluded from the arrangements. And then came an extraordinary chapter in the history of the women's movement, which began when Mussolini invaded Ethiopia in 1936. Now Sylvia had been one of the few British socialists of her generation to oppose the philosophy of empire. Harry Quelch, one of the leading early figures in the labour movement, had written: "Zulus belong to a different evolutionary epoch; it would be better if they all stayed in their own countries." Sylvia led the campaign in Britain to impose sanctions against the Italians and travelled to Ethiopia to offer support, to the extent that officials at the British Embassy wrote, "This confounded Pankhurst woman is more fuzzy-wuzzy than the fuzzy-wuzzies." The Ethiopian Emperor Haile Selassie fled to Britain, living in Bath, where he became friends with Sylvia, and he would spend his holidays in Worthing, which I find extraordinary. I wonder if there's footage of him going up to a policeman and saying, "Excuse me..." Once the Second World War started, Britain was at war with Italy, so they sent a force to Ethiopia to drive Mussolini out, but then they occupied the place themselves. So Sylvia continued to campaign for independence, to the extent that Churchill kept a file called "How to Answer Letters from Miss Sylvia Pankhurst." By the end of the war Haile Selassie had been reinstated as Emperor, and he invited Sylvia to visit as his advisor on policies for women. You can't help thinking that when the king of the Rastafarians invites one of the century's most famous feminists to advise him on policies for women, there must be some awkward moments, such as, "Um, this bit here about women being unclean during their periods, anything wrong with that?" One of Sylvia's campaigns was against the BBC. Each night during the war the Home Service would play the national anthems of the Allied countries, but not that of Ethiopia, so she led a campaign that forced the BBC to add it to the nightly anthems.
Sylvia went to Addis Ababa and was appalled to find that a ban on blacks entering certain areas was still being operated by the British, so from a suburban home she kept on campaigning, and when a new constitution was agreed, ending the occupation and granting the vote to everyone over 21, she said, "The victory of Ethiopia is the most satisfactory achievement I have seen." At the age of 77 her beloved Silvio died, after which Sylvia wept for days. Sylvia and Silvio's 30 years here is marked only by this pacifist monument that they erected themselves in 1935, and as you can see, the good people of Woodford tend to it lovingly on a daily basis, with wax, T-Cut, polish, varnish, Windolene; nothing's too much trouble. When Sylvia received an invitation to work with the Emperor Haile Selassie, she went with her son to live in Ethiopia, and then in 1960, at the age of 78, she died. Haile Selassie flew to Addis Ababa to order a state funeral, at which he stood for the whole two hours, and across the East End of London loads of people stood by the side of the road going, "Whatever they say about Sylvia, she never threw bricks at her own." Now a large part of the women's movement claims that Jordan is a role model, or that Diana was a modern feminist; some even say she was a republican, though given that her main ambition was for her son to become king, I'd suggest that puts her on the moderate wing of the republican movement. And politicians blame the ever-decreasing number of people who bother to vote on apathy. But how can it be apathy, given that we've recently seen the largest demonstrations in British history? That's not apathy; it's because fewer and fewer people feel any connection between themselves and the politicians. It's like, if Cliff Richard was doing a concert three doors from my house, I wouldn't go. That wouldn't be apathy; I wouldn't be sat at home going, "Oh, he'll be doing Bachelor Boy in a minute, but I can't be bothered." It's wilful non-participation. Sylvia Pankhurst understood that the vote was worth
campaigning for, because it raised the status of women. It was a victory against the attitudes of those who would never allow women a say until they were bigger than men. However bizarre her final days, Sylvia Pankhurst lasted the course that so few managed, never for a minute embodying passionless dullness; throughout her entire life she was intrepid, spirited and interested. "Oh, what's the point? What's the point of coming here?" [Music] The chairman of Bad Hair Days on BBC Two, Donald Trump, puts The Apprentice USA in the firing line next. Fight for your right to participate in the Mark Steel Lectures; the website is at open2.net [Music] |
english_literature_lectures | Harold_Bloom_on_Shakespeare.txt | Large, controversial, opinionated, disputatious; defender of the aesthetic and cognitive standards in the profession; maintainer of canonical standards for the study and appreciation of literature; original, inimitable, intractable; the King Kong of criticism; full of chutzpah; hardly the mildest of men, as he claims to be, Harold Bloom has dominated literary criticism for our time. There is no one like him. As one critic says, "Bloom, Bloom, he takes up the room; he's known all the world from Maine to Khartoum." I'll end with a line and a half from Shakespeare, as seems appropriate: "Why, man, he doth bestride the narrow world like a Colossus." Please join me in welcoming, as a culminating lecture in Shakespeare at Yale, Harold Bloom. Heinrich Heine remarked that there is a God and his name is Aristophanes. I revise that to: there is no God but God, and his name is William Shakespeare. Samuel Johnson told us that the essence of poetry is invention. Following that sublime critic, I called an endless book of mine Shakespeare: The Invention of the Human; 14 years later I'm still chided for confusing the Bard with Thomas Alva Edison. Following Johnson, my trope suggested that Shakespeare, the essential poetic dramatist, had revealed to us much that always had been there but had not been available before he discerned it. His recognition of the human was an act of literary knowledge, a mode concerning which we still comprehend rather little. Falstaff, Hamlet and Cleopatra are transactions in knowledge; so are you and I. The ancient Greek word for word, logos, in its root means a gathering together. The Hebrew davar for word, a word that is also an act and a thing, derives from a root meaning to thrust forward something that previously was held well back in the self. When I listen to Falstaff or to Hamlet, I hear that bringing forward. What we learn from Shakespeare's most vital men and women is the knowledge they incarnate. This is not the
knowledge of the philosophers or of the mystics, or even the knowledge of Homer or Dante. It is unique to Shakespeare, though Montaigne and Cervantes, his greatest contemporaries, are closer to it than anyone else before or since. The self-awareness of Falstaff, Hamlet, Iago and Cleopatra is the same quality that renders them endless to meditation, ours and their own. If their vitalism is illusory, why then, so is yours and mine, for we are their descendants; much of our self-consciousness had its inception in their self-awareness. What we cannot catch up to is the amazing tempo of their words, thoughts, acts. That preternatural quickness emanates from their playfulness; ludic intensity renders the purposes of playing purposeless, confronting us with mimetic energies we scarcely apprehend, let alone absorb. Reread The Tragedy of Hamlet, Prince of Denmark, and center upon the thousand lines from Act 2, Scene 2 through Act 3, Scene 2, a quarter of the play in its uncut composite length. What you confront in that whirligig of wonders only rarely is the imitation of an action or any other aspect of theatrical illusion. Plays within plays, soliloquies, theatrical gossip, lectures on acting crowd upon you until you do begin to feel that Hamlet somehow is an actual person intruding into a dress rehearsal for an unwritten drama that scarcely could be staged, really, even if it were achieved. Pirandello, Brecht, Samuel Beckett, whoever you will, could not match this kaleidoscope of a theater of mind so capacious we still cannot encompass it. Hamlet palpably is an experimental thinker rather more than he is Shakespeare's thought experiment. His seven soliloquies break the process of the discursive in ways that prompted Nietzsche's apothegm: "That for which we can find words is something already dead in our hearts; there is always a kind of contempt in the act of speaking." I can think of nothing more alien to the magnificence of John Falstaff, who excels even Prince Hamlet in starting fresh meanings rather than repeating old ones. Falstaff
finds avalanches of words for what lives fiercely in his heart, and always he glories in the act of speaking. Few others in all of imaginative literature speak so superbly aloud. "Thou hast damnable iteration, and art indeed able to corrupt a saint. Thou hast done much harm upon me, Hal, God forgive thee for it. Before I knew thee, Hal, I knew nothing; and now am I, if a man should speak truly, little better than one of the wicked. I must give over this life, and I will give it over. By the Lord, an I do not, I am a villain. I'll be damned for never a king's son in Christendom." "Before I knew thee, Hal, I knew nothing," addressed by the fat knight to the prince, is unanswerable and sublimely outrageous. Sometimes I muse that the two oldest persons in Shakespeare are Lear and Falstaff, who have in common only their age, which at 81 I share, happily resembling in temperament Falstaff and not the tragic monarch. Knowing Falstaff is more than entertainment, though precisely that was Shakespeare's first grand triumph with his audience. Hotspur says of Falstaff and his irregular humorists that they daffed the world aside and bid it pass. You know, actually, my dears, I'm going to transfer myself at this point to the table. Hotspur says of Falstaff and his irregular humorists that they daffed the world aside, that is to say, bid it pass. We too should thrust the most officious scholars aside and let them pass, so as properly to apprehend Sir John and learn to know what he knows, which is what he is: his absolute sense of being. Falstaff is the Socrates of Eastcheap, and he teaches us the difficulty of what it is to be. His best student, the ungrateful Prince Hal, absorbs the lesson and intends to hang the instructor, or in any case see him cut down upon the field of battle. As life itself, principle as well as particle, Falstaff declines to be slain; he will waste away and he will die of a broken heart, but that scarcely diminishes his ontological self. He teaches all things in himself: wit, exuberance, defiance of time, but above everything else the sheer joy of
being, of being a human being. More even than Chaucer's Wife of Bath, the wonderful Panurge of Rabelais, Sancho Panza, he manifests the blessing, the great Hebrew blessing, which I've always translated as "more life." Long ago I wearied of being told that Hamlet and Falstaff are men made out of words. In more than 80 years of countless friendships and, alas, enmities, I've encountered no one among us half so real or so intelligent as the Prince and Sir John. If to be a Shakespeare scholar entails denying or evading his creation of character and of human personality, then I'm pleased to be merely a reader. Literary knowledge is not a shadow of our failure to know one another, but the larger form of what that relation yet might be. Darkest strains reverberate when we realize we are uncertain our self-knowledge equals that of Falstaff and of Hamlet, which can cause distress, since their catastrophes come precisely from their greatest human gifts. Years ago in London I lunched a number of times with Owen Barfield, wonderful, profound student of poetic thought. I remember our final meeting, when Owen asked me, "Harold, does it not cause you chagrin when you reflect that your emotions originally were Shakespeare's thoughts?" Pondering this, I replied that we had become his characters, a reflection also, of course, of Ralph Waldo Emerson's, when he observed that Shakespeare had composed the text of modern life. Wittgenstein strongly dissented; he ironized that Shakespeare was too English, and much more a creator of language than of thought or of character. David Hume would have agreed with Wittgenstein, but I prefer Hegel, with his very fine perception that Falstaff and Hamlet, Iago and Cleopatra are free artists of themselves. Shakespeare endows them with the capacity to recreate their souls, each of them his or her own demiurge, the fiercest of demiurges. This once would have been the house in which to utter the Great Name: Yahweh, who with Jesus and Hamlet makes up a triad of the West's major literary characters, inaugurates the
particular metaphor of being that Shakespeare evades and subverts in his most vitalizing characters. In Exodus 3, Yahweh calls his reluctant prophet Moses, so as to send that slow-of-speech shepherd (the Hebrew is obscure; it could mean either that Moses stutters or that he stammers; I suspect he's a stammerer), so as to send that slow-of-speech shepherd down to Egypt to lead the supposedly chosen people back to Canaan. An extraordinary text, which I quote from the King James: "And Moses said unto God, Behold, when I come unto the children of Israel, and shall say unto them, The God of your fathers hath sent me unto you; and they shall say to me, What is his name? what shall I say unto them? And God said unto Moses, I AM THAT I AM: and he said, Thus shalt thou say unto the children of Israel, I AM hath sent me unto you." The Geneva Bible inaugurated this interesting mistake of "I am that I am." The great Protestant martyr William Tyndale, the greatest of the English Bible translators, was much closer to the Hebrew, ehyeh asher ehyeh, in having Yahweh say "I will be what I will be." As I read the Hebrew, it means "I bring into being what I bring into being." Punning outrageously on his own permanently mysterious name, Yahweh states the myth of presence: I will be present whenever and wherever I choose to be present, which of course implies he will be absent whenever he chooses to be. I recall saying in some book or other of Yahweh: I don't like him, I don't trust him, I wish he would go away, but he won't. Shakespeare's own dialectic is a wholly secularized shuttle of what I suppose you might call the real presence and the real absence. Teaching Shakespeare, you teach presence, a teaching that enacts a reading of his ellipses; that is to say, no other writer has been anywhere near so skilled in the art of leaving things out. Though everything in the tragical history of Hamlet, Prince of Denmark is questioned and questionable, as we will see, the darker enigmas are unmentioned, and I always wonder why Shakespeare scholars don't attend to
these things. When did the erotic relationship between Claudius and Gertrude begin? Who is Hamlet's phallic father, the warrior king or his shuffling brother? More than any other Shakespearean protagonist, Hamlet does not mean what he says or say what he means. If we suspect that Claudius may be his father, can Hamlet, more intelligent than we are, suspect less? Why does he return from the sea to that Elsinore where his every first thought must be of his impending death? In a drama uniquely and openly aware of its audience, we are compelled to complicity with Hamlet, who pragmatically is an agent of death, unlike Falstaff, always to be sublimely praised, because Falstaff is life's ambassador to us. Hamlet speaks 1,500 lines of what in composite editions is a play of 4,000 lines, much Shakespeare's longest. When not on stage, Hamlet's absence is a presence, as there can be no other focus. The drama is his passion and his mystery, as the Gospel of Mark was that of Jesus, except that Hamlet is unfathered, as James Joyce first suggested. A play that takes as its burden the meaning of self-consciousness may hint that inner freedom can be attained only when the protagonist can separate his genius for expanding consciousness from his own dangerous passion for self-theatricality. Falstaff, apotheosis of self-presence, enhances his freedom by playing out the play. Both Hamlet and Falstaff are great improvisers. If finally I go with Falstaff, it is because he goes into battle with a bottle of sherry in his holster, forswearing a mere pistol, dodging the bottle when the outraged Prince Hal throws it at him. The sublime Falstaff states a zestful truth: "I like not such grinning honour as Sir Walter hath. Give me life: which if I can save, so; if not, honour comes unlooked for, and there's an end." I've heard Shakespeare scholars say to me, face to face, that they actually prefer Hotspur's "Doomsday is near; die all, die merrily." And yet Hotspur delights us, because high-spiritedly he loves his life on his own terms. Hamlet has no love for
life, no love for himself, no love indeed for anyone else, be it Ophelia or Horatio. As for the absurd pseudo-Freudian readings of the play: when the dying Gertrude cries out, "O my dear Hamlet," his response will be "Wretched queen, adieu." We do not receive Falstaff's dying words, but in Mistress Quickly's wonderful Cockney prose elegy for him in Henry V, she vividly presents the scene: the great vitalist is a little child again, playing with flowers, smiling at his fingertips, and singing the 23rd Psalm. Pure presence hardly could be more enhanced. Shakespeare wisely avoided a final utterance from the undying Falstaff; how I would wince if he departed murmuring "The rest is silence." So capacious is Shakespeare's effect upon us that we have no secular similitudes to be offered as alternatives. The strongest of his protagonists constitute a facticity, an entire world that contains us. How can you or I achieve perspective upon a primordial poem of mankind that issued, incredibly, from a single creative mind? We are inside Shakespeare's imaginings, and therefore we scramble to see him with any lens that is not his own. I no longer go to suffer the plays staged, because I am too old to sustain yet more exasperation at high-concept directors who assume they can think beyond him; what results are caricatures, travesties, noise. I cannot delude myself that I am more intelligent than Hamlet or Falstaff. Here is the most famous soliloquy in the language, the Black Prince's ontological meditation, staled by repetition only if you do not strive to think through it with Hamlet. Let us think it through together, while forgiving my now broken old voice (and indeed it is cracked and broken, like Falstaff's). Ignore the punctuation, which is not Shakespeare's anyway, and be aware that Hamlet's assertions always verge upon being questions. As I say, I cannot read this as the great Sir John Gielgud could recite it; I can read it only as an interpreter. "To be, or not to be, that is the question: whether 'tis nobler in the mind to suffer the slings and arrows
of outrageous fortune, or to take arms against a sea of troubles, and by opposing end them. To die, to sleep; no more; and by a sleep to say we end the heart-ache and the thousand natural shocks that flesh is heir to: 'tis a consummation devoutly to be wish'd. To die, to sleep; to sleep, perchance to dream: ay, there's the rub; for in that sleep of death what dreams may come, when we have shuffled off this mortal coil, must give us pause: there's the respect that makes calamity of so long life. For who would bear the whips and scorns of time, the oppressor's wrong, the proud man's contumely, the pangs of dispriz'd love, the law's delay, the insolence of office, and the spurns that patient merit of the unworthy takes, when he himself might his quietus make with a bare bodkin? Who would fardels bear, to grunt and sweat under a weary life, but that the dread of something after death, the undiscover'd country from whose bourn no traveller returns, puzzles the will, and makes us rather bear those ills we have than fly to others that we know not of? Thus conscience does make cowards of us all; and thus the native hue of resolution is sicklied o'er with the pale cast of thought, and enterprises of great pitch and moment with this regard their currents turn awry, and lose the name of action." This absolutely is not at all what it purports to be. It is not a reverie contemplating self-slaughter, since Hamlet, that great deceiver, again does not mean what he says or say what he means. A psychic double-dealer, he broods on the abyss of being as his own mode of consciousness. That sea of troubles will be transmuted by Milton into the great phrase "a universe of death" that the heroic Satan must explore en route to the new world of Eden. Hamlet, Western hero of consciousness and not of conscience, now invents what the Romantics were to call the power of the poet's mind over all outward forces, over all of Denmark's cosmos. Night, death, the mother, and the sea: Walt Whitman's four-fold metaphor for his unknowable soul. Soul will undo being and end conscience, which at once
is awareness and what James Joyce called the agenbite of inwit. Hamlet gives presence, or being, two choices only: suffer like a Stoic, or else outrageously take arms against the ocean, whose heightening pitch must consume us in its currents, since our opposition cannot hope to quell them. The consummation, however devoutly desired, must conclude in consummatum est, the final words from the cross of being. But why does the prince so beautifully call death "the undiscovered country from whose bourn no traveller returns"? The ghost of King Hamlet, brutal and malevolent, has returned, and once again Hamlet does not mean what he says. What shall we know of whatever it is that truly he means? The puzzled will must be the center of this fresh creation of meaning, this new birth of poetic knowledge, for it is the will in Shakespeare that overhears itself and proceeds to will change. Will Shakespeare plays on his name in the sonnets, and he rings changes yet more profound upon it in the tragical history of Hamlet. Will in Shakespeare is desire, and one of the sonnets terrifyingly says that desire is death. How can the will overhear itself? The will to change confronts at last the final form of change, death. That distinctly is not the Falstaffian will, which goes on unsettling what passes for Shakespeare scholars even as it exasperates their hero, the brutal and conniving Prince Hal, who becomes Henry V. What better time to jest and dally than in the senseless butchery on battlefields? Hamlet jests and dallies in the graveyard, but beneath that he will not say how much he loved Yorick, if only because he really has never loved anyone else, himself included. Where are we to find the meanings of Hamlet's words? Falstaff is how meaning gets started, while Hamlet is how it ends, by annihilating the will. I do not suggest that Shakespeare takes sides between the two, because his capaciousness enwombs them both. You and I are Shakespeare's objects; we are the children of his will. His perspectivism is so dumbfounding that
we cannot know whether or why we ourselves must choose. Do we care whether fresh meaning can get started in or for ourselves? I'm a bad sleeper, and I find myself asking myself that four or five times a night. Shakespeare's influence on Shakespeare drove him to an augmenting, elliptical style of thought and rhetoric. Falstaff's sister is Cleopatra. George Bernard Shaw pioneered in despising them both, but fortunately no one has followed him in that regard. But then Shaw once actually wrote: "When I consider the mind of William Shakespeare and compare it to my own, I can only feel pity for him." Falstaff's sister Cleopatra meets her match not in the noble ruin of Antony but in the clown, death's emissary, who carries in his basket the pretty worm of Nilus that kills and pains not. And again, my wretched voice can't encompass this, but I need it. The clown was undoubtedly played by the great later clown Robert Armin, who played the Fool in Lear and Feste in Twelfth Night, and so on. "Look you, the worm is not to be trusted but in the keeping of wise people; for indeed there is no goodness in the worm." Cleopatra: "Take thou no care; it shall be heeded." "Very good. Give it nothing, I pray you, for it is not worth the feeding." And Cleopatra: "Will it eat me?" I love that clown. "You must not think I am so simple but I know the devil himself will not eat a woman. I know that a woman is a dish for the gods, if the devil dress her not; but truly, these same whoreson devils do the gods great harm in their women, for in every ten that they make, the devils mar five." Cleopatra: "Well, get thee gone; farewell." Clown: "Yes, forsooth. I wish you joy o' the worm." Even in Shakespeare there is nothing else quite like Cleopatra's sudden return to childhood fantasy. I hear a little girl, and not the old serpent of the Nile, in "Will it eat me?" The clown, charmed by her as we all have to be, conceals in his populist misogyny our genuine distress, and indeed his own, that so magnificent a woman should slay herself. Except for Falstaff and Hamlet, no other death in
Shakespeare divests us of so much life, is so large a withdrawal of being. Tragedy in Shakespeare turns upon a loss in being that threatens to empty us of meaning; we gain knowledge at the expense of life, and at the hazard of nihilism. Shakespeare, in my experience, is the height of literary knowledge, but what is the fate of such knowledge? Nothing is got for nothing. The price of love increases with aging, because more of those you loved are now among the dead than among the living. To know fewer people, without the finer tone of knowledge which is, after all, human love, is a poverty that redefines imaginative need. Divinity, for almost all of us, no matter how we try to deceive ourselves and others, is an affair only of silent shadow and of dreams. I study Shakespeare with great diligence, hoping to continue acquaintance with the ever-living, his women and his men. What is it I know when my knowledge of them augments, or at least remains constant? Shakespeare's own invented word for the identity of any man or woman is the "selfsame," all one word, a deep paradox, since he is much the most metaphorical of all writers, beyond even Dante. Perhaps Montaigne comes closest, but Montaigne presents only the one magnificent identity, himself. Notoriously, Will Shakespeare has virtually no identity, whether in the sonnets or in the plays. Hamlet has a dozen identities, and scores of modulations within him; there are more roles in him than there are great actors to perform such immense intricacies. Whatever literary knowledge is, it begins and ends with Shakespeare, except that there is no end to Shakespeare. I once believed poetic knowledge could be regarded as figurative thinking, but now that only seems to me one more evasion. Nietzsche defined the motive for metaphor as our desire to be different, the desire to be elsewhere; but I think literary knowledge is a larger response to desire, a longing for a larger self seeking farther shores. The Jesus of Mark's gospel seems not to know who he is, and he keeps asking his
thick-headed disciples, "But who do people say I am?" Hamlet, the secular Christ, as Unamuno thought Don Quixote to be, like Cervantes' sorrowful knight knows exactly who he is, and who he's going to be if he chooses. Our sense of who we are and of what we might be owes everything to Shakespeare. Historicisms old and new dissent from that, but there is no history, only biography, and biography, from Johnson and Boswell on, is a Shakespearean mode. Literary knowledge is in the first place knowledge of literature, and after that something else. For want of another term I name this added realm Shakespearean consciousness, which vaultingly bridges the gap between Hamlet's ever-growing inwardness and outward show, a pageant at once celebration and lament. We are at a festival of knowledge, since Hamlet knows more than we know; still we lament his not divulging more, and we wonder if we resemble those poor players Rosencrantz and Guildenstern rather than Hamlet. Many of us should wince when the prince tries Guildenstern (and again, by now my voice is completely gone; I can only read this interpretively): "Why, look you now, how unworthy a thing you make of me! You would play upon me; you would seem to know my stops; you would pluck out the heart of my mystery; you would sound me from my lowest note to the top of my compass: and there is much music, excellent voice, in this little organ, yet cannot you make it speak. 'Sblood, do you think I am easier to be played on than a pipe? Call me what instrument you will, though you can fret me, yet you cannot play upon me." Hamlet, like Shakespeare, composes a drama rehearsing the mode of drama he writes. I know that's so insane a sentence I will read it again slowly; I worked it over, but I didn't find any way of putting it more simply: Hamlet, like Shakespeare, composes a drama rehearsing the mode of drama he writes. Bewilderingly brilliant, this propels us into a kind of imaginal knowledge for which there are simply no ground rules. Stage playing is reinvented before our eyes, startling our ears, as it is in those
wonderful tavern skits improvised by Falstaff and Prince Hal. Literary knowledge perhaps can entirely be regarded as one more play within a play, daunting our prior awareness. How can each of us cast off the shadow of self while continuing the expansion of our own consciousness? At 81 I no longer read to assuage loneliness. Early in life I thought we read so many books because we could not know enough people; that seems absolutely wrong now. Not even Montaigne or Shakespeare, Cervantes or Proust, suffice when we pine for an absent friend or mourn the dead. Without the presence of another highly valued being, the self's shadow occludes any heightening of awareness. Still, we must take our condition as given. Solitude to some degree augments with aging; you begin to write, as I'm now speaking, in the style of old age. More surprisingly, I think, the solitary reader drifts into a new mode of reading, in which the momentous world of Shakespeare refigures one's prior sense of spatial relation. In regard to Shakespearean personality and characters, you go from containing Falstaff, Hamlet, Cleopatra to being contained by them, as though they were the echo chamber your spirit inhabited. In the atmosphere of solitude the spirit withers gloriously, a sorrow I associate with the stridency of Nietzsche's Zarathustra crying out, "But do I bid you become plant or phantom?" Shakespearean knowledge cancels that choice: in fiction Falstaff is as much quicksilver spirit as sagging flesh. Hamlet, the apotheosis of mind's theatricality, laments that his sullied, or solid, flesh will not melt, thaw, and resolve itself into a dew. Orson Welles in a letter sublimely suggested that Hamlet sensibly reached and then stayed in England, rejecting Elsinore's slaughterhouse, and happily grew old and fat, aging into Sir John Falstaff. The mercurial Orson, thinking his way figuratively into conversations with Falstaff in his splendid film Chimes at Midnight, gave us the only portrait of the giant wit that could approach Ralph Richardson's definitive stage enactment, which I saw
at 16 and hold fast by, two-thirds of a century later. I return to the ultimate question: what is Shakespearean knowledge? Giambattista Vico, whose poets were Homer, Virgil, and, rigorously de-theologized (I tripped over that), a Dante with the theology removed, grounded all knowing in the true poets, who repopulated the earth with giant forms. By the Viconian test, Shakespeare is the truest poet. The knowledge Shakespeare gives us is not language, contra Wittgenstein, but diction, the choice of words. All the Shakespearean gifts, cognitive, figurative, inventive of personalities, depend upon his total supremacy in diction. Ariel's songs in The Tempest are illustrative: "Those are pearls that were his eyes: nothing of him that doth fade but doth suffer a sea-change into something rich and strange." The litany rings on pearls, were, eyes, and then on nothing, fade, suffer, sea-change, to culminate magnificently with the rich and strange that are undesignated. Diction opens to the realm of the will to change, the lesson of the master. Inevitability of phrasing is an agon, a struggle in which only Dante among all Western poets rivals Shakespeare. After Dante, the Italian literary language had to be his initially highly personal Tuscan, originated by Cavalcanti but then greatly surpassed in the Commedia. After Shakespeare, English literary language gradually unfolds into something wholly his. In what ways, intimate, candid, self-reflective, do Hamlet, Falstaff, Cleopatra render us a strange newness in meaning? Repetition, mere repetition, consigns meaning to the rubbish heap; only fresh meaning hurts us enough to be memorable. Walt Whitman fought against memory in his Leaves until hospital service among the dead and dying broke him. Until then Walt was a mythic cosmos, a kind of hermetic pleroma, the fullness of powers that granted us the divine man who inhabits Song of Myself. There is a fullness of being in Falstaff and in Cleopatra, and a ruining of such being in Hamlet, who would wear out any cosmos
whatsoever. Unlike Cleopatra and Falstaff, Hamlet gives us the illusion that he exists neither in space nor in time; it is as though his astonished inwardness represents a reality that has priority over every temporal and spatial division of human existence. I never quite agreed with two very gifted late friends, Anthony Nuttall and Frances Yates, in their linkage of Shakespeare to the hermetic fantasies of Robert Fludd and Dr. John Dee, and yet Hamlet illuminates the hermetic image of the divine man falling outwards and downwards into our abyss much more fully than any esoteric tradition can hope to clarify the Black Prince. What is foundational for the hermetic corpus is only another unpacking of the heart with words, words, words for Hamlet. So capacious is Shakespeare's project that religious speculations, however heterodox, cannot enfold the mind finally self-purified of theatricalism, the mind of Hamlet. What then is the scope of literary knowledge? Hamlet, Falstaff, Cleopatra stand at the very center of it. Hamlet, a kaleidoscope, whirls his wonder-wounded hearers into a cosmological quest absurdly too momentous for the rotten court of Elsinore. Falstaff, regaled with sherris-sack, challenges your perspectivism: to moralists Sir John is a cowardly buffoon; to you, whoever you are, he ought to be the true image of life itself. Cleopatra to dull eyes is merely aged; to those who can discern, she personifies heroic Eros. Does Shakespeare care how you choose or not? Knowing that is the clue to literary knowledge and to this most comprehensive of all knowers; in part only do we come to know what he knows. Hamlet defies augury (and again, forgive my voice): "If it be now, 'tis not to come; if it be not to come, it will be now; if it be not now, yet it will come: the readiness is all." Willing to die is a shade only from willing death. Falstaff will have none of it, or of old age, though he knows well enough that both soon will have him. His great enemy the Lord Chief Justice chides him: "Is not your voice broken, your wind short, and every part about
you blasted with antiquity?" To which Falstaff, who loves laughter above all things, even more than he loves himself or Prince Hal, greatly replies (I think sometimes this is my favorite moment in all of Falstaff): "My lord, I was born about three of the clock in the afternoon, with a white head and something a round belly. For my voice, I have lost it with halloing and singing of anthems." Crying out praises of Yahweh is hardly the Falstaffian mode, though on his deathbed he will sing psalms. I know him better than I know Hamlet, for who can identify with Hamlet? All of us have known perhaps two or three women who in life could assume Cleopatra's garments: "Give me my robe, put on my crown; I have immortal longings in me." Whatever else they may be, the triad of Falstaff, Hamlet, Cleopatra are not among Shakespeare's fools of time, as so many of his protagonists have to become. It is a scholarly commonplace that the Renaissance fiercely augmented the classical drive to transcend time, with art outlasting bronze. Elizabeth's fable of chastity, manipulated into political power, became an image of temporal constancy, always the same. The great queen resisted only the final form of change, death. Literary knowledge, however, is of and in time, and cannot exist without consciousness of temporality. I think of Ovid as the first instance of overt literary knowledge that I can recognize, and Christopher Marlowe, more than Shakespeare, seems closer to Ovid in an obsessive anxiety that fosters a specifically literary knowledge. The Tempest, which in my own reading is a final overcoming of Kit Marlowe, deliberately dwarfing his Doctor Faustus, is the farthest experience of literary knowledge available to us. Literary knowing is an event wherein our own acquaintance with the known is self-reflexive, and the illusion glares back at us. When endlessly I reread and teach, yet once more, the tragedy of King Lear, the knowledge I gain primarily is what only the uncanny Fool knows: on that great stage of fools we cry
the cry of the human, as though we are the newborn; but we fall into time's theater, and not into Hamlet's or Falstaff's theater of mind. Lear's Fool, half a changeling child, half wise beyond wisdom, knows that he and all the others on that stage inhabit what the ancient Gnostics regarded as a cosmological abyss, enduring beyond a false creation ruined by its capricious rushing into being. Falling into time, we abandon a better knowledge for empirical caprices; yet that is the best we can achieve, unless the richest mode of literary knowledge can be attained. Proust, much the most Shakespearean writer of his own time, recovered knowledge in the privileged moment that partly redeemed time. Samuel Beckett, meditating on Proust from a Joycean perspective, reminds us of the link through John Ruskin to the High Romantic spots of time, secularized epiphanies, the will's revenge against time and time's "it was." A failed Ruskinian quest is brilliantly isolated by Proust as sexual jealousy, restored to the Shakespearean intensity of Othello and of Leontes in The Winter's Tale. The Proustian comedy of sexual jealousy plays against Shakespeare's darker view of it, but both augment our understanding of literary knowledge. In Proust, the jealous lover is an art historian, searching for every visual and temporal evidence of infidelity. As fools of time, we are all of us agonized lovers, fantasizing fictions of duration that, if they are jealous enough, become our own bad poems or stories. If I had to choose one character only as a guide to literary knowledge, it would of course be Sir John Falstaff, rather than Hamlet or Cleopatra, Dante the Pilgrim or Don Quixote, Proust's narrator or Leopold Bloom. For me the question "What is literary knowledge?" is answered by Falstaff. Why? He need not quest for lost time, since he triumphantly has thrust time aside and bid it pass. He is not human, all too human, as academic moralists tiresomely repeat to me; he is the true and perfect image of life itself. Turn him and turn him, for everything is in him. Thank you. [Applause] |
english_literature_lectures | Literature_Discussion_Ilja_Wachs_on_the_19th_Century_Novel.txt | I just want to take a minute to exercise my contractual right to embarrass Ilja. Thank you all for coming to the first talk in our series, in which members of the literature faculty will be speaking to the community of writers at Sarah Lawrence. And although I don't think he's here, I want to thank a student in the graduate writing program, David Chiu Mandel, who first suggested the idea of the series. Our speaker today will be Ilja Wachs, who's been teaching at Sarah Lawrence since 1965 and who's the author of a study of the novels of Charles Dickens called Dickens: The Orphaned Condition. I want to mention that I was a student here in the 70s and I studied with him, and I remember thinking beforehand that studying with him would be good for my writing, and I remember the disappointment that I felt a few weeks into the course when I started to feel as if it wasn't good for my writing at all. He had nothing to say about literary craft as I understood it; he had nothing to say about the handling of dialogue; he had nothing to say about the use of the third-person point of view, which we writing students weren't yet sophisticated enough to refer to as the POV. It took me a while to see that things of much greater consequence were at stake. Ilja was reading the words with such seriousness and intensity and playfulness that at last it came to be clear to me that he was giving us an education in why literature matters, and by the end of the year I felt that he had given us an education. Ilja is such a special teacher. I remember the passage from Lawrence (please forgive Lawrence for saying "man"); he says the critic must be able to feel the impact of a work of art in all its complexity and its force, and to do so he must be a man of complexity and force himself. But someone
emotionally educated in this way is as rare as the phoenix. More than this, Lawrence said, even an artistically and emotionally educated man must be a man of good faith: a critic must be emotionally alive in every fiber, and then morally very honest. I feel so lucky to have studied with him. I think after that introduction I should just leave. I feel very lucky to have Bryan here now as a colleague. I mean, there's something quite wonderful in having people who you've taught many, many years ago come back and be your colleagues; there's just a very moving sense of continuity that you get from that. And Bryan is a wonderful teacher and a splendid writer and a terrific friend, so thank you, Bryan; I hope that counter-embarrassed you a little bit. This is hard for me, because I normally don't lecture; I teach seminars. I never taught a lecture at the college, and I've never given a lecture, actually, except the senior, what do they call it, the senior talk, that I did, as Bryan did; but beyond that I haven't done that. Normally I need a text between myself and you, and I don't have a text today, since I haven't assigned one, but I'll try to do what I can. One of the great contributions of the nineteenth-century novel is that while it is realistically tough and describes all the social constraints upon the self, the limits and distortions to which the self is subject by the society in which it lives, these novels often also strain to move beyond the givens of constraint. They struggle to suggest new human possibilities, harmonies and reconciliations of opposites, that are in effect utopian in character; utopian not in the sense that they are humanly impossible, but utopian in that they are not capable of being fully realized in the moment of history in which they were written, or in our moment of history. These utopian moments, which are rich with sensuous and communal plenitude, are to be found, for example, in Mark Twain's Huck Finn: on the raft, a world of peace, love, and near-
timelessness, which stands in stark contrast to the horror, violence, and rigidity that Twain depicts on shore. The river, which T.S. Eliot called that great brown god, is in some sense transcendent: a world on the river in which time, death, slavery, the opposition of black and white, slave and free, is suspended; in which the word "lazy," for example, becomes an active verb ("we lazied around"); in which a kind of Eden is recreated on Jackson's Island. The river also allows Twain to take human capacities and for a moment to refashion them, to create a self in whom abstraction is negated, and a kind of sensuous concreteness and direct capacity for feeling takes the place of the religious and social abstractions, the rigid forms of civilization: the forms of Calvinism, for example, that acted as a rationalization for slavery; the forms of honor that acted as a rationalization for horrendous feuds. The creation in Huck of a sensory self, of a self largely grounded in sense perception, and of a self grounded in feeling life, is one of the utopian elements of the novel. You find it in the language, in all sorts of wonderful language that Twain gives to Huck. Examples: to anticipate something good is to make one's body's mouth water; to be terrified is to make one's body's hair fairly rise on one's neck; to be courageous is to have sand in your craw (you know, birds digest food by sending it through sand in their craw); to lose the will to turn Jim in is to take the tuck out of oneself; to be afraid is to have the experience of your heart jumping among your lungs; to have conscience affect you is to have it pinch you; to be civilized is to be cramped; and most important of all, to encounter a devastating scene of human degradation is to feel sick. That's what he does when he encounters murder, slavery, exploitation: he says it made me feel sick, as if, you know, the body itself is responding to the moral universe. One of the ways in which he shows this is
that Huck at some point is on Jackson's Island and he's describing, you know, he says, "It smelt late." And he says, you know what I mean? It's sort of interesting, right: the abstraction of time, the serial reification of time, translated into sense perception. It smelt late. I once had a freshman studies class in which we read Huck Finn; it was a really interesting studies. And I said, okay, let's have an experiment: come to class when it smells like time for class; let's carry this out. They did, and at first it was terrible chaos. You know, I mean, it's almost impossible; they had to discard their watches, of course; they straggled in one after the other, and it was just awful. But in about three weeks (and I was very patient; at that point you have to be, with first-year students; we used to call them freshman studies before the thought police took over and translated it into first-year studies, because of gender issues), after about three weeks they all came to class on time, and that turned out to be the best freshman studies class I ever taught; the students bonded over our attempts to coordinate our lives through our smell. It was really quite wonderful. Anyway: so, right, it smelt late. Another example of a kind of grounding of consciousness and of sensibility in the body: later on in the novel there is this horrible thing with Boggs in the strange Arkansas town, which is a sort of collapse of civilization, where the loafers set dogs on fire, where they sic dogs on nursing sows, where they tie tin pans to dogs until they run themselves to death. It's a kind of awful, awful place. And there's a drunk in the town called Boggs, and Boggs is an unpleasant drunk; he sort of sasses people and insults them, but he's harmless, you know, he's just drunk. And one day he's insulting Colonel Sherburn, who's the aristocrat in the town, and Sherburn has had enough of him, and he says to Boggs:
"I've had enough of you. If you're not out of town by one o'clock, I'm going to shoot you." And Boggs's daughter comes running in and tries to get him out of there, but he's weaving around; he himself wants to leave, but he's drunk and doesn't have control of his actions. And along comes Colonel Sherburn at one minute to one, and at the stroke of one o'clock he raises his pistol from the bottom up, aims at Boggs, and shoots him to death. It's one o'clock. And I think Twain is suggesting that if you could smell time, that wouldn't occur. The code of honor is very much dependent upon the abstraction of time, and that code of honor is responsible for so much death. Twain actually felt that Sir Walter Scott was responsible for the Civil War. Why? Because Scott celebrated the old feudal, chivalric order, and the Southern aristocracy used that concept of chivalric honor as a justification for their part in the Civil War and for slavery. So what Twain is saying is: we need to soften our sensibilities in some way; we need to undo our obsession with serial time, as it's expressed in that duel. Anyway, that's Huck Finn. Now, there's even more utopian stuff. I said this is utopian because what Twain really does in Huck is so undo any epistemological capacity for abstraction that Huck, even when he thinks in Calvinist terms and tries to turn Jim in, can't do it, because the concreteness of his sensory and feeling life gets in the way. It would have made him sick to turn Jim in, even though he believes that by not turning Jim in he's going to go to hell. And he says, "All right, then, I'll go to hell." I mean, there's the Calvinist ideology, which is a matter of reason and of consciousness, which he internalizes and believes in, and then there's that sensory and feeling life, which is so concrete and which subverts these
ideas that are so dangerous in Twain's conception. There's also something interesting: not only does Huck smell time, but there's also something that goes on between Huck and mathematics. Huck says, "I don't take no stock in mathematics." That sounds interesting, and then you realize that at one point he finally tells you how old he is, when he visits Buck, and he says Buck was my age, thirteen or fourteen. Huh? Why "thirteen or fourteen"? The answer is that there's something wrong about being too precise about quantity, about being too precise about what time it is, about being too precise about what age you are. And if you keep working those standards of precision, sooner or later you're going to say: this is black, this is white, this is free, this is slave, and never the twain shall meet. Anyway, it's really something. I want to just read a little bit -- did I bring the Twain with me? Yes, I did. Okay, I just want to read one little thing in Twain to give you a sense, again, of the utopic quality that Twain works with. This is the last scene on the river: "Two or three days and nights went by; I reckon I might say they swum by, they slid along so quiet and smooth and lovely." Isn't that nice? It's beautiful. Time becomes the river: you dip the abstraction of time into the sensuous flow of the river, and then two or three days swim by, they slide by, so smooth, so lovely. It's a very different order of experiencing time, and therefore a very different order of experience in general. And then he goes on -- I mean, that's that wonderful idyll on the river where Jim and he are naked, they put the raft down, they talk about cosmology to each other. I don't know if any of you remember, but the question is how the stars came about. There's a big discussion between him and Jim on that river, where time sort of stands still and is sensually mediated. They also
talk about how the stars were made, and Huck says first, oh, they just happened -- sort of expressing a scientific viewpoint: they're an accident; they just took place. And Jim says no, the moon laid them. And Huck thinks for a while, and Jim goes on to say, yeah, the moon laid them, and falling stars are bad eggs hove out of the nest. And Huck thinks: well, there seem to be so many of them, I don't know that the moon could have laid them; but then, he says, I saw some tadpoles and I saw how many of them there were in the river, and then I understood that the moon could have laid them. So what's that about? That's about Jim having some cosmological understanding of the universe as really not being a dead, neutral object in which accident and chance rule, but a place where love reigns, where generation and reproduction take place. It's a kind of wonderful pantheistic, animistic sense of life, and that's possible only on the river, where time is suspended for a moment. I mean, that's one of the utopian moments in nineteenth-century fiction; it's so, so good. Oh -- you were raising your hand? Please do, any time anybody wants to interrupt me; I love being interrupted, as my students will tell you. Another utopic moment that I think really is astonishing, and is one of my favorites -- again, where a writer is going beyond the limits not of what is humanly possible but of what is historically and socially possible -- is the great mowing scene in Anna Karenina, where Levin mows. Remember, some of you? It's an extraordinary moment: "Levin went on mowing... he experienced those moments of oblivion when his arms no longer seemed to swing the scythe, but the scythe itself swung his whole body, so conscious and full of life, and as if by magic, regularly and definitely, without a thought being given to it, the work accomplished itself of its own accord. These were blessed moments." Our hero mows
with the peasants, bringing in the hay harvest, in this scene, and I've given you only a brief excerpt from it; it really goes on for about six pages. All sorts of antinomies and opposites are reconciled in this scene of ecstatic, sensuously and morally fulfilling, communal, unalienated labor: a kind of working of oneself back to Eden. You know, when you're expelled from Eden, you're condemned to work by the sweat of your brow and to give birth in pain. Well, what Tolstoy does here is have Levin work with the peasants, fall into the rhythm of the peasants' mowing after struggling to do so, and finally begin to experience a kind of state of beatitude, a kind of working of one's way back to the prelapsarian Eden. All the senses in this experience are purified and brought alive through the unalienated labor: touch, smell, taste. It's also an overcoming of class distinctions between master and peasant; it's also an overcoming of the distinction between tools and the man, where the scythe takes on a life of its own instead of being a dead object wielded by a live human being. It's the loss of a paralyzing self-consciousness; it's an ecstatic, utopian experience. So those are two examples of the way nineteenth-century novels periodically reach out and fashion -- I hate to say "construct," because that leads you to "deconstruct" and to critics who regard life as an Erector set, so I won't do that. So that's part of what the nineteenth-century novel does. By the way, the mowing scene is also the reconciliation of play and work, of subject and object; it's really quite astounding. Now, these are notes on the nineteenth-century novel. There's no unity here, but that's all right; there's often no unity in nineteenth-century novels, so I'm just piling them up. Nineteenth-century novels are anti-positivistic. By that I mean: if you take the issue of money, for example, the approach of economists in the nineteenth
century, and even today, is to say: what's money? It's a medium of exchange, right? That's what it is; it's neutral; it's a medium of exchange. But in nineteenth-century fiction, money is loaded. In Crime and Punishment, for example, every exchange of money embodies all sorts of human issues and feelings: humiliation, power, castration, sexuality, rage, generosity, love. In Dostoevsky, money is experienced in the deepest and most intimate recesses of the self. And that's what nineteenth-century novels do: they take reified concepts like money, they bring them alive, and they expose the intense human feeling that is embodied in them. In Dickens -- you know Dickens's life, in which he was proletarianized, abandoned as a child, made to work in a blacking warehouse, made to support himself at the age of ten, so that money was a crucial part of his experience as a human being -- he does marvelous things. When David Copperfield, for example, runs away from his warehouse (it's a very autobiographical novel), he undertakes this epic journey to his aunt, where he thinks he will be rescued from this proletarian existence, and he has no money with him, so he takes pieces of clothing and keeps pawning them in various pawnshops along the way. And of course Dickens, who has a wonderful sense of humor, has him say: I was afraid that by the time I got to my aunt, I'd be naked. So he keeps discarding articles of clothing, and he meets first a pawnbroker called Dolloby, and he offers Dolloby his waistcoat; he wants to pawn it, and he asks, can you give me a fair price for it? And Dolloby looks at it and says no -- I can't be buyer and seller too -- and then provokes him into bargaining. And in that very small moment, which has to do with money and is very important, Dickens articulates this sense of what the social world is really about: it's impossible for a child to expect
an adult to be fair and to be generous. The marketplace has invaded everything; everyone's consciousness is full of the marketplace, the way it is today. It's really intense, and that's the world that David has to contend with. A little bit later he meets a very drunk and crazy pawnbroker to whom he tries to pawn his jacket, and they agree on a price, but the man then retreats into his house and won't give David the money -- except that every few minutes, or every hour, he comes out with another pence, gives it to David, and goes back in, and David has to wait. And every time he comes out he says, "Goroo, goroo! My heart, my liver, my lungs, my limbs!" It's a grotesque, comic, but very meaningful moment. See, what Dickens is saying is: money -- pence -- my heart, my liver, my lungs, my limbs. The equation of money with body parts is part of what Dickens is suggesting as the significance of money. And there are wonderful things -- I mean, when Dickens himself is a poor child in the blacking warehouse, the issue comes up: he makes fourteen -- I think eighteen -- pence a week, and he always falls short at the end of the week. Why? Because he loves a certain kind of pudding that's baked in the shops, with big currants in it, and he tries to resist buying it, because buying it means that by the end of the week he won't have enough money left to feed himself. And so you get this wonderful, hallucinatory reality of money as having so much to do with oneself. All right, that's money -- that's a characteristic of nineteenth-century novels that deal with money: they refuse to reify money; they insist upon making money human and alive and relevant. Another characteristic is that they almost all end badly. They have bad endings: perfunctory endings, endings that sacrifice probability to a happy ending, indifferent endings, endings that are not balanced. You wonder why
these are great novels when their endings are so crappy -- I really do; they're just bad. I mean, if you think of the ending of Huck Finn, which is about Tom and Huck freeing a free man, that's awful. If you think of the way Austen ends her novels -- suddenly appearing in the I-form as a novelist, tying everything up and resolving things in an almost lackadaisical, indifferent, and playful way, destroying the illusion of reality by ending -- you begin to wonder what the hell is going on, that the endings are so bad. Well, I think it really has to do with the fact that the nineteenth-century novelist only in a limited way conceives of himself as a narrator, as creating narratives with a beginning, a middle, and an end. That's not the deepest level of his teleology, of his purpose. It's rather the creation of a world that the nineteenth-century novelist is about. Now, a world, unlike a narrative, doesn't end, right? It just goes on, hopefully. So the endings suffer because of a basic division in the novelist between wanting to write a narrative with a beginning, a middle, and an end, and wanting to create an endless world, and that conflict results in strange endings. They don't want to end, in short; they really don't. The novelist who took this to its furthest logical conclusion was, of course, Balzac. Balzac wrote maybe a hundred and ten novels. He wrote them late at night, smoking a hookah with God knows what in it, and drinking pots and pots of Turkish coffee -- a sort of caffeine sludge. All sorts of interesting things happened to him, which we'll talk about maybe a little later. One of the things that happened was that his characters spoke back to him. He'd try to get a character to go one way, to do so and so, and they'd say, no, you can't do this. Now, you've probably all had that experience as writers, haven't you, where you create something and suddenly the creation runs
away from you, and runs away with you, and takes on a life of its own, and you say, huh, I didn't mean this, I didn't plan this, it just happened? Well, that's what kept happening to Balzac, carried to an extreme: because of the coffee and the hookah, he suffered from auditory hallucinations, so his characters really did talk back to him. And he died of coffee poisoning -- he really did. I track what novelists died of; it's one of the things I do; it's fun. Dickens, for example, died of reading out loud. What happened was that in his fifties he was suffering from heart disease based upon kidney failure. He used to give readings out loud -- he was a great actor; he directed amateur theatricals; he was a fantastic actor -- but in these readings he tended always to choose scenes from his own work, and he tended to choose the most violent possible scenes: for example, from Oliver Twist, Sikes beating the prostitute Nancy to death with a huge club. Something about that turned him on. And he couldn't stop giving these readings, because they made him a lot of money, and given his early experiences as a child, he never had money enough. So his doctors said, please don't do this; your blood pressure goes up; you have a heart condition. And he couldn't stop himself. The last reading he gave was of Oliver Twist, from that scene. Now, you have to realize what goes on in these readings: two thousand people show up; they have firemen all over the place because they're using gaslights to light the stage; strong men and weak women faint. It's a gigantic spectacle. Well, he reads from that, he goes home, and the next day he has a heart attack and dies. So that's what I mean by "he died of reading out loud." It's a joke, but it's partly true. Tolstoy died from running away to a railroad station, which, if you know Anna Karenina, is quite ironic. What? Yes, absolutely, because what Tolstoy does by killing
Anna -- and that I hate most of all, as my students know -- when he did that, he destroyed the parallel structure, and the ending is flaccid because it no longer has the tension between Levin and Anna. Yes -- the marriage epic, you know. And then the ending of Middlemarch is nice: Eliot says that things are not so ill with you and me partly owing to certain unhistoric acts, like the ones Dorothea performs. It's nice, but it isn't strong; it's just nice; it sort of trails off. Yes? What prevents them from putting death in the service of a meaningful ending that might be painful? Right, exactly. Well, one of my explanations is that if you try to end within the pages of a novel, you're betraying in some sense the world-creation that you had intended. I think that's a kind of explanation for why the endings are so bad: you really can't take a world and put it in the pages of a novel; the world leaks out of those boundaries. Part of it also had to do with their audience. You have to understand -- this is very curious about nineteenth-century novels, and very important -- that their audience was not split between highbrow and lowbrow. They had a mass audience. Tolstoy, Dostoevsky, Dickens, Trollope, George Eliot wrote for thousands and thousands of people, and they all wrote for serial publication. (I'm sorry, I'm beginning to skitter off; I'm free-associating instead of following my text. I hope you don't mind.) They wrote for serial publication, and that led to great intimacy with their audience. For example, Dickens issued his publications three chapters at a time, bound in a green cardboard cover with a lithograph on the front and a sort of ad for liver pills on the back, and he issued these every month for most of his novels. And the paterfamilias, the
father of the family, would run to the newsstand, buy it, and come home and read it to his children and his wife. So you get this very intimate thing going on, where your audience is there all the time while the novel is actually being produced. Every once in a while that ran into trouble. For example, he wrote a terrible novel called The Old Curiosity Shop, which is lugubrious and sentimental -- it's really awful -- and it has a heroine, Little Nell. Little Nell is an orphan with a gambling grandfather; the grandfather is all she has left, and he gambles away all her money, and she's being chased by a dwarf, Quilp, who eats hot shrimp with the tails on and drinks burning rum without gagging. He's chasing her, and he's sort of body and she's spirit. I mean, it's awful; this is one really lousy Christian novel; it's terrible. So he writes this in serial publication, and of course his audience, as the months wear on, gradually becomes aware that Little Nell is going to die, that that's part of the novel -- a sacrifice -- and it can't stand it. So they start haunting him in the streets; they catch him and say, Mr. Dickens, please don't kill Little Nell. At one point he was actually reduced to putting on a disguise, a beard, so he could avoid them. And of course he killed Little Nell. That's what he had intended to do, and he was damned if his audience was going to stop him from doing it. But that kind of intimacy also led some of these writers to be unwilling to alienate their audience by putting in deeply pessimistic endings -- like the two endings of Great Expectations. In one ending, Pip never sees Estella again, as he shouldn't, and he's been ruined in so many ways; it's a really tragic, sad ending. So Dickens reads it out loud to a friend of his, Forster, and Forster says, your audience won't stand it;
you've got to do something here. So Dickens rewrites it, and he puts in a line saying there was no shadow of a parting any longer between me and Estella -- a sop to your audience. There's a sort of strange, wonderful informality about these novels; the novelists are in them. Anyway, they wrote for a mass audience, and I think this is very important: they were not alienated modernist writers. Why is that so important? Because with that mass audience they could do two things simultaneously: they could write very subtle, complex novels that require lit-crits to analyze them, and at the same time these same novels were wholly accessible to their public. That's an amazing thing, which can't be repeated in the modern world: you don't have great literature that's totally accessible to a public. Modern writers try to protect themselves, as they create, from the vulgarity of their audience -- but not in the nineteenth century. And that has all sorts of implications. One of them was that when Tolstoy, Dickens, Dostoevsky, George Eliot wrote, they really felt a great sense of self-confidence and empowerment. They felt that they could shape their audience, that they could have a moral impact on a world. They didn't feel helpless, impotent, marginalized. Yes? Oh yeah -- Eliot had a huge audience. She was, I think, next to Dickens, the most popular writer of the nineteenth century, and she also wrote for serial publication. She suffered from another issue, which is that she lived with her paramour, who couldn't get a divorce. Queen Victoria loved her work -- it was wonderful, moral, everything -- and she wanted desperately to meet her, but her advisors said, no, you can't do that; Eliot is living in sin. One of her issues -- I'm really free-
associating, just for a moment; it's really quite wonderful -- is that she started off being anonymous as a writer. George Eliot is really Mary Ann Evans, and she used the pseudonym George Eliot partly because the first thing she wrote, Scenes of Clerical Life, contained a thinly veiled attack on her father and her brother, and she didn't want them to know that she had done that; and partly because she wanted her novels to be taken seriously, and she felt that men would be taken more seriously than women. And she kept this fiction up for a long time -- for a couple of years after she'd become very famous -- until a crazy clergyman in the provinces, realizing that nobody knew who George Eliot was, decided to take credit for writing the books, and at that point she said, no, we've got to let people know. Now, all this time -- before people knew -- she had submitted a story to Dickens's journal, and Dickens read it and said at once: oh, it's a woman. Dickens got it straight. Anyway, that sense of relation to a public, the feeling of being able to mold and shape human life rather than just being a spectator or a commentator on it, is, I think, quite central to nineteenth-century novels. It accounts, among other things, for the way they're not about art, as so many novels today are. They're not about the act of novel-writing; their heroes are not artists. In the nineteenth century you don't write a novel like A Portrait of the Artist as a Young Man. If you do David Copperfield -- well, David is ultimately an artist, a writer, when he grows up, but it's not central to his experience, and his experience is the experience he has in common with ordinary people. These novels are about typical, normative, ordinary experience, and part of that is because of the relationship of the writer to his audience. Also, that sense of confidence makes
itself manifest in these incredible narrative voices -- to go to the craft of writing for a moment. In the 1970s Grace Paley -- the greatest short-story writer America has ever produced -- taught here and started the writing program here. She was also the most arrested woman in the history of the United States: her peace demonstrations had reached the point where there was no state left where she wasn't on probation, so she couldn't demonstrate anywhere, because she'd be jailed. She was quite wonderful. She used to plead with her students -- nothing against me; we were friends and political allies -- please don't read too much of this stuff, because nineteenth-century novelists have such powerful narrative voices that you'll lose the capacity to develop your own narrative voice. And she was right: they are very powerful. Listen to George Eliot for a moment -- listen to the voice. These novelists think nothing of breaking out in the middle of a novel and talking in their own voice, being prophetic, teaching an audience, preaching to an audience. There's nothing the matter with that from their standpoint: in a nineteenth-century novel you don't have to constantly sustain the illusion that this isn't being written and that there's no audience. Listen to her for a moment, if I can find it. Yeah, here's Eliot. Her book is written with a view to the death of God, her disbelief any longer in the Christian religion, and it's an attempt, in part, to find a substitute for Christian morality, for Christian ritual. She tries to found something called the religion of empathy: a naturalistic religion, based upon the human capacity for empathy, rather than a supernatural religion. And you hear it in her voice this way: "We are all of us born in moral stupidity, taking the world as an udder to
feed our supreme selves. Dorothea had early begun to emerge from that stupidity... to conceive with that distinctness which is no longer reflection but feeling -- an idea wrought back to the directness of sense, like the solidity of objects -- that he had an equivalent centre of self, whence the lights and shadows must always fall with a certain difference." The rest of it is a little complicated when you just hear it; you have to look at it in print. But: we are all of us born in moral stupidity, taking the world as an udder to feed our supreme selves. Wow. Listen to the legislative character of that voice; listen to that voice speaking in terms of human universals. And the "we" is so central, very often, to the great nineteenth-century novel: the ability to say "we," where "we" means the writer, the characters, and the audience. They reach always for the human universal and try to achieve it without losing a sense of concreteness. That, again, is the authority of nineteenth-century voices. Balzac -- I told you about Balzac, the great one who died of coffee poisoning and had his characters talk back to him; we'll deal with that in a moment. How am I doing? I've got time, right? How much time do I have? Twenty minutes. Thank you. Going back to Balzac: you hear Balzac's voice in Cousin Bette, if I can find it, which I probably can. Yeah. The confidence they often have in simply summing up a historical moment, in getting at its essence -- you hear him speaking of the bourgeois monarchy of Louis-Philippe in Cousin Bette: "Don't imagine that it's King Louis-Philippe we're ruled by; he knows, as we all do, that above the Charter -- that is, the Constitution -- there stands the holy, venerable, solid, gracious, beautiful, noble, ever-young, almighty franc." I mean, just listen to him summarize the extent to which the franc has become a kind of god in the world of Louis-Philippe. Where is the Balzac today? I mean, if you
think of what's happened in the last two years -- Ponzi schemes, the collapse of the financial industry, the limitless human greed in this economy -- where is the great writer who tackles that head-on? It's the great theme of our time. It's been done, of course: Dickens did it in Little Dorrit with the financier Merdle. He loves doing that with names -- "merde" is right there in the French -- so he calls his financier Merdle. George Bernard Shaw insisted that it was the most radical socialist novel he knew. That's ridiculous -- it's not -- but it is a brilliant dealing with a Ponzi-scheme financier. And Trollope did it also, in The Way We Live Now. These novels did that kind of thing, and it's sad that no great writer is tackling themes that big, themes central to our common historical experience, at the moment. And I wish it would start happening again, because if we don't, we leave the analysis of our common human condition to the goddamn social scientists, and that's not a good idea; it really is not. Their way of knowing is fractured, specialized, and reified. What you need is the novel; what you need is the human imagination bringing the truth alive. You don't need to absorb the truth in a dead, reified form; you need the truth expressed in a grandiose act of the imagination, which is what these writers were able to do. That's the way the truth is really disclosed. Oh -- sorry, I got excited; I forgot where I was. Well, who do you think comes close today? Yeah, I like Philip Roth very much, and he does have a large scope. What -- a political reactionary of the worst order? Okay, but you should forget about politics, because one of the characteristics of nineteenth-century realism is that the work consistently transcends the politics. Maybe he does; he
does have a large ambition. But do they write the finest work? Look, I don't really know modern literature, that world, though I talk as if I do. Love in the Time of Cholera is one of my favorite books. Anyway -- other things about nineteenth-century novels. Bad endings; and they're fat. Karen Lawrence actually taught a course, apparently in California, called Fat Books, in which every book had to be a thousand pages or longer. These books are very fat. If you ask why, there are people who actually believe the novelists were paid by the word. I had somebody in my class say that recently -- he had to read David Copperfield, which is about a thousand pages, and he was getting very tired, so it was a sort of nasty comment: it's long because he was paid by the word. No, he wasn't paid by the word, but the length is an issue, and it's interesting to speculate about what it might be about. This has to do with a very interesting conversation that Brian and I once had at dinner in the president's house, where I, to be a little bit provocative, said that no great novelist can be a vegetarian. I should have said no great nineteenth-century novelist could be a vegetarian; that would have been a little better. And what I meant by that is the kind of fastidiousness that says: I will take this into myself, I will not take this into myself; I will let this come out of myself, I will not let that come out of myself -- you can go both ways. That kind of fastidiousness is really foreign to the nineteenth-century novelist, who is a vigorous swallower of everything he can possibly get hold of: the body, class, sex, psychology, nature, money -- anything he can get hold of. His craving for the greatest fullness of reality is so profound that he swallows everything and then recreates the world. Now, if you were God,
and it was your job to recreate the world, and you had to include all these levels of reality, you'd need a lot of time and space in order to do it, even as God. Well, that's why these novels are so fat: because they are so incredibly inclusive, because they're trying to create a world, and the world is complex and has so many different levels of reality, and they're not willing to compromise on that world-creation. So they become very long. (Oh, hi, Karen, how are you? ... No, that's actually Joyce. Joyce may well be a nineteenth-century novelist in disguise, exactly -- there was a real continuity there, as there is, for example, in A Portrait of the Artist as a Young Man, which is inconceivable without David Copperfield; it's really built on it, in the treatment of childhood, in so many ways. And it was your course, Karen's course, that was Fat Books, of course.) Okay, now, what else about nineteenth-century novels? Many things. They're full of self-contradiction; they're not coherent and orderly in their moral vision. Something goes on there that is really wonderful: a kind of conflict between their ideology, very often, and their sense of reality. For example, The Brothers Karamazov is Dostoevsky's most Christian novel -- it really is; it celebrates Christian mysticism right in the middle of it. The left-wing intellectuals of his time attacked him for writing a politically tendentious, morally tendentious novel. He responded: ha! When would you ever find in your work a challenge so great and so profound to the goodness of God as that which I have put into Ivan Karamazov's mouth? Ivan is the atheist brother of the three in The Brothers Karamazov, and what he does is narrate a series of episodes of the torture of
children. They're horrible, and they're so profound that he even shakes the Christian mystic Alyosha at one point. So he says: what would you do to this man who bayonetted a child in the presence of his mother? And Alyosha starts shaking and says, shoot him. He says: see, your faith is shaken. But, you know, Dostoevsky really did that — he really challenged himself with the opposite of what his viewpoint was, and you can feel a Dostoevsky novel trembling with conflict between faith and doubt, between faith and doubt, between an insistence upon having a positive, sublime view of life and a sense of reality that fundamentally contradicts it. Or take, for example, the two voices of Bleak House — I mean, "read something of Aron esto," you once told me — that's an in-joke. If you take the two voices of Bleak House, right, that's very interesting: it's the only nineteenth-century novel that has two narrative voices, an "I" voice, a character within the fiction, and a third-person voice, and they are profoundly different. Esther, who is the "I" voice of the novel, is an orphan, a modest, humble, self-delimiting young girl who always serves other people, who is concerned with their welfare, right, and who narrates in such a way as to take account of the complexity of every human being. That's Esther, right — and Esther writes the beautiful English sentence, with a subject, a verb, and a predicate, and never departs from that. And Esther says to her audience, she says, I'm not very smart — maybe this is her being self-deprecating — I'm not very smart, but my understanding quickens whenever I love someone. What she's saying, in effect, is: I'm not going to narrate or understand the world through anger, even though I've been abandoned and orphaned; I can only understand and narrate the world through affection and love. That's one narrator. The second narrator is absolutely wild: the third person, who's not a character in the novel as Esther is, right, but who's wild, whose rage against the world is profound, and
who — to whom Dickens gives every weapon of the imaginative novelist possible to express that rage against the world, even up to the point where he wants to indict the political authorities for their failure to ameliorate the human condition, where he starts calling them Coodle and Doodle and Foodle, and then, not being content with this, a little bit later calls some Chizzle, Mizzle, Pizzle and Fizzle, right — I mean, the kind of, you know, satire that totally dehumanizes them in order to express the rage that that third person feels. Now, these two voices are never reconciled; that is, there's no resolution of the fundamental conflict by which they see the world, none. The novel ends with both voices still very real and very contradictory to each other, and that's one of the characteristics of 19th century fiction. Henry James called 19th century novels "loose baggy monsters," which is very nice, I think. It's partly true — I don't know about the monster part, but certainly, well, if you take the length — but certainly loose and baggy is often the case. And part of that loose bagginess involves — you know, when you read a 19th century novel you can sometimes feel, or at least I do — this is a little bit crazy, so don't listen to this too much — you can sometimes feel that the novels are still alive. That, you know, if life means experiencing conflict and contradiction — and I think it does, to a very large extent — these novels are still alive: you can feel the process by which they're written, and you can feel the conflicts that are still in them, you know, that are still expressed in them in all sorts of ways. Those are just two examples of radical conflict and radical contradiction in 19th century novels; there are legions. I mean, look at that Victorian novel of Jane Austen's, Mansfield Park. She writes this beautiful novel, this wonderfully optimistic, playful novel that integrates all human capacities, Pride and Prejudice, right — I mean, everyone who loves it
partly because everything turns out wonderfully, and, you know, there's no contradiction in there — you know, the human condition is integrated and is happy. And then she writes Mansfield Park, which is humorless and Victorian in its character. But even there you see the conflict: you see two characters, Mary and Henry Crawford, both of whom are denigrated in some way, both of whom contain elements of Elizabeth in Pride and Prejudice — they have a sense of humor, they're sexy, they're playful, right — and these characters are ultimately disposed of at the end of the novel. They don't get to marry the person they want to marry, which is terrible: in an Austen universe, if you can't do that, you can't do anything, okay? But throughout the course of the novel you begin to realize that although she is condemning them, because it's a Victorian novel, she's also in some way articulating who they are with the kind of vivacity, lifelikeness and animation that she gives to none of the other characters. And that tells you that she still hasn't resolved it — you know, she still hasn't fully absorbed the Victorian denial of the kind of paganism of Pride and Prejudice; the conflict still exists. And you know it still exists, because in the next novel, Emma, it comes back to life again, and Emma is a living character with, you know, intense human capacities. So — how much time have I got left? I think I'm getting tired. But what else do I want to say about 19th century novels? I might say one or two things quickly. More self-contradiction, yeah: form and content, a very interesting relation between form and content. Take Crime and Punishment, for example, right — there is no novel which depicts the human condition as worse than Crime and Punishment. Human beings are impoverished, they suffer, there is no human community, there is no safety net. Dostoevsky actually, in spite of being Christian, empties the world of priests and of the confessional, so you have to go to a pub in a drunken
state in order to confess to someone. It's a horrible, horrible world of alienation, of isolation, of suffering, right? But then you look at his form, and you realize that Dostoevsky writes in such a way that he's always doubling or tripling his characters. Either wholly, as wholes — he doubles Raskolnikov and Marmeladov; he triples the prostitute Sonya and Raskolnikov's sister Dunya — or he takes parts of characters and mirrors them in other characters. That's the way he writes. And then you realize: isn't this interesting — on the one hand he's depicting a world in which there is no sense of community, and on the other hand, formally, he is describing, articulating a plot and characters in a narrative who constantly reflect each other, so that at the level of form he's kind of creating a human community that totally contradicts and redeems the horrible world that he is describing substantively. And you often get this kind of thing in Dostoevsky. So that's another thing. Laughter — have I got five minutes? Okay, laughter. There is no more hilarious world than the world of the 19th century novel; the phenomenology of laughter, of different kinds of laughter, that you can find in 19th century novels is extraordinary. You know, from the wonderful cutting ironies of Jane Austen to things like George Eliot's "young men's consciousness is chiefly made up of their wishes" — the narcissism of young men — I mean, that's incredibly funny, and it's all over the place, including the belly laughs that you can get from Twain and certainly from Melville. You know, Melville says, to write a mighty book you have to have a mighty theme; he says you cannot write a mighty book about a mouse — so he's choosing a whale in place of a mouse. You know, he's sort of being ironic about it — it's wonderful — being ironic about his own megalomania as a writer, laughing about it. He says, give me a pen, give me Mount Vesuvius' crater as an inkwell. There's humor throughout, a tremendous
amount of humor. And the humor is liberating — I mean, the humor allows all sorts of human horrors to be depicted without dissolving us into an intolerable experience, and it allows certain things to be expressed which couldn't be expressed, you know, without the humor. I'll give you a couple of examples and I'll stop there, okay? Is that all right? Yeah, a couple of examples. Dickens — the button popping in Dickens; do any of you remember? It's really quite wonderful. Here's David: his mother is marrying the mean Mr. Murdstone, who's going to ultimately kill her — not literally murder her, but kill her through deprivation of love and through his harshness and coldness — and David is suffering this terrible oedipal defeat, right? And he turns to Peggotty, the servant, right, the buxom servant — let me see if I can find it — and she hugs him a lot, and he relies on these hugs to sort of compensate him for the coldness of the world and the deprivation that's taking place. So you hear this: "Peggotty, opening her arms, took my curly head within them, and gave it a good squeeze. I know it was a good squeeze, because, being very plump, whenever she made any little exertion after she was dressed, some of the buttons on the back of her gown flew off; and I recollect two bursting to the opposite side of the parlour while she was hugging me." It's funny, but he later elaborates it all over the place. I mean, he does things like — this time he's being sent away, so she's trying to be very, very loving towards him, you know, right, so she hugs him really fiercely, all her buttons fly off, and you know what happens when your buttons fly off, right? You're left naked, and this generous Earth Mother body is available to him for nurture and for nourishment. So he gets serious in the middle of the comedy; there's a kind of wonderful seriousness to it. Then he even takes it further — he loves to elaborate these comic moments until they really go wild in their elaboration. He says — he's being sent away and
Peggotty hugs him and some buttons roll off, and he says, I wonder, if I were being sent away, whether I could follow the trail of those buttons back to my home. Hansel and Gretel, right? Whoo — yes, exactly, that's what you feel: suddenly the comedy turns into exquisite poignancy of feeling, right? And without that comedy the poignancy of feeling would be lugubrious, it would be sentimental, and it would be terrible; but with that comedy it can be quite wonderful. Yeah, so you get the button popping. And then again with Peggotty — I think some of the best scenes in the novel, the most comic ones, have to do with Peggotty. He has bitten Mr. Murdstone's hand, the evil stepfather, and he's punished by being locked in his room, and later he's going to be sent away to school, separated from his mother. So he's there, right — there's a keyhole in the door, and Peggotty comes and she wants to communicate with him. She can't hug him, she can't pop buttons at him, you know, but she wants to express her love for him, so they have a conversation through the keyhole. This is hilarious: "I was obliged to get her to repeat it, for she spoke it the first time quite down my throat, in consequence of my having forgotten to take my mouth away from the keyhole and put my ear there; and though her words tickled me a good deal, I didn't hear them." It's hilarious, but you can feel something really important underneath the hilarity, right — I mean, the sense of being cut off from all sources of nourishment and nurture; and also the words that come through the keyhole as breath — you know, the word was made flesh, it's the breath of life — there's a lot of stuff going on there. "Peggotty fitted her mouth close to the keyhole and delivered these words through it" — this is wonderful — "with as much feeling as a keyhole has ever been the medium of communication." Then finally, you know, he keeps on doing it: she fell to kissing the keyhole, as she couldn't kiss me; we both of us
kissed the keyhole with the greatest affection; I patted it with my hand, I recollect, as if it had been her honest face. Okay, I think I'll stop here. Thank you |
english_literature_lectures | English_Literature_at_Lancaster_Universitymp4.txt | What I want to do is just start by characterizing the English literature course here at Lancaster, and I'm going to do it by talking to you a little bit about Part One, which is what we call our first year. One of the things I think makes English literature at Lancaster distinctive is that we offer an incredibly solid, strong grounding in English literature that can compete with any department in the country, but at the same time we introduce a number of unusual approaches to literature — creative approaches to literature, state-of-the-art courses that are much more cutting edge — and I think the combination is something that greatly appeals to our students and keeps us a bit fresh and alive as well as very solid in terms of what we have to offer you. So let me illustrate that now by talking about your first year. If you were to come here and do English literature, you would get a solid grounding, through lectures and seminars, in the English literary tradition: you get a broad historical sweep, you get introduced to literary criticism, theory, movements, genres, and we train you very solidly in research skills, reading skills, writing skills and exam skills. You have all these skills already, but we need to up the ante at university level, so you will get a fantastic base and training, and you'll have the knowledge and the range that you need to be a respected literature student. But at the same time we try to open up ways of approaching literature beyond the things you've probably done before. So we don't just limit literature to plot, theme, character, imagery, a bit of historical context; we try to get you to look at texts in new ways, from angles you might not have approached them from before. So I'm going to give you some examples. In your first year you'll study Renaissance sonnets, and one of the things we ask you to do, besides read them and read criticism about them and learn about
their invention and their historical context, is we ask you to actually write a sonnet. And there is something about writing a sonnet that gives you an understanding of its form and of its structure that you can't possibly get just from reading it in a detached way: you'll know about the meter and the rhythm and the movements and what happens, too. The sort of fringe benefit is that students tend to produce — some of them — absolutely fantastic sonnets, hilariously funny sonnets, poignant sonnets, and it's really a high point of the first term here. You're not marked on them, so you can really be free. When we get to Shakespeare, the same thing holds. We teach Hamlet, and we teach you all the sort of literary approaches, we have you look at criticism, but the other thing we do is introduce you to what we call performative criticism, which is Hamlet interpreted through performance, in the form of two films. So when we come to look at film performative criticism of Hamlet, we're not simply looking at how a character might say the lines, or even things like costume or production design or what historical period it's in; we go beyond that. We look at things like how music or camera angle or lighting can open up a text. We get you to try to think beyond heavy-duty solid semantics and into some more capacious and multifaceted, intermedial, interdisciplinary approaches to Hamlet, and we believe that that gives you insights that you wouldn't normally have just from thinking about a more limited array of aspects. Another example is when we come to talk about the Victorian novel: our theme is the fallen woman, and we set Victorian fallen women in dialogue with a neo-Victorian novel, John Fowles's The French Lieutenant's Woman. This was written in the 1960s, and in the 60s people were reacting against Victorianism — they thought of it as a very repressed kind of period — and there is a dialogue that goes on between Victorian texts and neo-Victorian texts that, again, we feel opens up new
ways of approaching literature, gets you to think about things in different contexts, and stretches your mind and makes you a better scholar of English. In a similar vein, poetry in place: well, you've probably noticed we're right on the edge of the Lake District, and there have been essays written about what if Wordsworth had lived in the jungle — would he have written the same kind of poems that he wrote in the Lake District? There's a consideration of the importance of location and place on the interpretation and the construction and the writing and the reading of English literature. We also have scholars who are very interested in politics, and they look at ways in which poetry intersects with political discourse — again, opening up a new angle through which to read poetry. When we come to our section on fairy tales, we ask you to look not only at traditional mythology and folklore but also at feminist takes on fairy tales — things like Pretty Woman as an adaptation of the Cinderella myth and how that's reworked in another context — again, just trying to get new angles, new modes of interpretation. And then we have a number of events throughout the year where creative writers and authors come to Lancaster and read their work and talk about the process of creating literature. We have events in town as well that we send our students to, so again there's a new way of looking at literature: through the eyes of writers, authors and so on. You can ask them questions; they come and speak to our first years and so on. You probably want to know about course structure now, and you've probably heard about the three subjects that you take in your first year — some of you are going "oh no," some of you are going — I want to just explain this to you. It's something Lancaster's done from when it began. When you come to Lancaster you will study your major subject — if you have a combined major, you'll study those two subjects — but you're asked to take a third subject as well. There are
three main reasons that we have this three-subject Part One structure. The first is that it gives you an increased number of options: it defers your decision-making and allows flexibility. When I went to university I knew that I only wanted to study one subject — it was going to be psychology, and I didn't want to study anything else — and I ended up, within a term, changing to journalism; two terms later I changed to mass communication, and a year later I added theatre. Then I did an MA in film studies and a PhD in English. So I thought that I knew exactly what I wanted to do; but what we find is that at the end of our first year, 28% of our students change the degree scheme that they came in on — either they drop a combined major or add something. Now, you still may think you're the person who knows you only want to do English and absolutely nothing else. Well, there's a way that the three-subject Part One will greatly enrich your English literature studies. We don't study English literature anymore as if it were the Holy Bible, a sacred text studied for its own sake, for its wisdom and universal truths. You can't study English literature at university level unless you're engaging other disciplines: history, for historical context; psychology, for understanding of psychological theories; other kinds of languages and media; philosophy; religious studies. All of these things enrich the understanding, and some of them are indispensable to the knowledge of English lit. If you don't really want to be that kind of scholar, you can still do English language, English lit and creative writing. You can use creative writing to make you a better reader of English — and George will explain how that happens, because a lot of what you do in creative writing is reading and critiquing other people's writing — and you can use English language, or linguistics, to give you a stronger foundation. Our degree scheme allows you to take risks, to try things you might not be that good at, because
you only have to get a pass mark. So you can try something that you might never have dared to try at A level, in case it messed up your life, and actually do it and enjoy it here. There's a list in the prospectus — there are about 50 options, I think; it's just huge, maybe it's 40 — you've got lots of things to choose from. So this is what it would look like. Part Two is what we call the second and third years. So in your first year you'd take three subjects, and you'd have your assessment — you've got to pass that — and then in the second year you've got a number of options. Obviously, if it says "for non-majors," that's it, you can't continue; but many subjects you can continue with, so you can end up just doing one subject or a combined major. The only thing you can't do is all three. So these are just sort of no-brainers: 1, 2, 3; 1 and 2; 1 and 3; 2 and 3, okay? More specifically, here's an example of what your second and third years might look like. I've given you a feel for the first year by illustrating it at the beginning of the talk; these are examples of courses that you could take — courses that give you solid coverage and grounding and yet also have innovative, interesting approaches within them, okay? So the two that are compulsory are Theory and Practice and the dissertation — which is a bit of an oxymoron, because that's a subject of your own choice, so it's compulsory to make your own choice. The other thing that we have in our third year — and this is where you get a feeling for, and get engaged and involved with, faculty research — is these specialized half units. They go for half a year each; they're small groups with a faculty member on special topics, other than the big broad historical sweeps. So we have: 21st century fiction; 21st century theory; British and American crime stories; science fiction in literature and film; contemporary fiction and critical theory; literature in the visual arts; women in American poetry; where do poems come from; the Byron-Shelley circle — that's
kind of the Rat Pack of the Romantic era; Ruskin on art, architecture and society; Victorian Gothic; other Victorians; on the boundaries of nonsense; film adaptations; Hollywood 1939; children in horror fiction; literatures of identity; African literature; the 20th century Indian novel; early modern outlaws — that's like Robin Hood; ceremony in Shakespeare; reforming the body in Elizabethan England; Utopia, colonialism and the New World. And then there's also a schools volunteering module — I don't know how many of you want to teach, but we give you experience in a school where you're mentored, you get practice, and it exponentially aids your applications to PGCE programs by having that experience — so that will be part of your course credit. I want to stress that not every half unit is offered every year — it changes — but there's a great selection each year. There's lots more information about the books you'd be reading, the assessments you'd be doing, week by week what it would be like to be a student here, and you can find it — do you see, on the far right-hand side, this is our page for our current students. The Part One handbook tells you everything you need to know about your first year, and the Part Two handbook tells you everything you need to know about your second and third years, so you can download these and read them and get much more of a feel for the course |
english_literature_lectures | Frieze_Lecture_A_Postcolonial_Love_Affair_with_Charles_Dickens_Part_4.txt | As a postcolonial studies scholar, I see immense importance in studying and teaching Charles Dickens, both in my native country, Bangladesh, as well as in the US. Today — as you can see, the title of my PowerPoint presentation, of my lecture today, is A Postcolonial Love Affair with Charles Dickens. Now, how that love affair came into being — I will tell you the story, but before that I want to give you a little bit of an idea of what is postcolonialism, or what is postcolonial studies. So — oh, it's not working; how do I go to the next — try the spacebar, right down at the bottom — yeah, see, there's a little white arrow down there — all right. I don't know if you can read this, but this is something — if you go to a dictionary, any kind of literary theory dictionary, it will give you this definition, and I'm going to read a little bit, just to give you a sense of what postcolonialism or postcolonial studies scholars do. Here: "In a literal sense, postcolonial is that which has been preceded by colonization. The second college edition of the American Heritage Dictionary defines it as of, relating to, or being the time following the establishment of independence in a colony. In practice, however, the term is used much more loosely" — and that's kind of the way I've designed my PowerPoint also. While the denotative definition suggests otherwise, it is not only the period after the departure of the imperial powers that concerns those in the field, but that before independence as well. So when I move into the last part of my presentation, I'll try to show you how that happens: that even though postcolonial studies is supposed to be after colonization, after decolonization, and after the colony has gained independence, much of the time it's about the colonial period itself. And I focus on Australia, and how, you know, Dickens — even though he didn't contribute to creating sort of a
colonial atmosphere there — how people read, during the colonial period, all of the texts that Dickens wrote, and I'll talk about that in the last part of my presentation. So the next thing is, I would like to introduce you to a couple of the scholars of this field. They're highly renowned, and, you know, their research started anywhere from 1975 to the recent period — I did not specify the period. On your right-hand corner is Edward Said; you might have heard about him — I mean, he died recently — a Palestinian writer, and he came to the US many times, did lectures, so he's a very renowned figure. On your right-hand corner here is Arjun Appadurai, and he's a postcolonial studies scholar right at this moment, and I absolutely love him because he's kind of moving from postcolonialism to what is known as hybridity. Now, another word that comes in association with postcolonialism is hybridity — the hybrid nature — and often, you know, even if you look at immigrant communities in the US, the South Asian community, you can easily apply that term here, where, you know, our children are married into different racial groups, and there are many immigrants that come together living in one diaspora, or several diasporas. On the bottom right-hand corner is Édouard Glissant; again, he is a sort of French Caribbean scholar. On your left is the famous Frantz Fanon — The Wretched of the Earth, which is one of the biggest books, of course, in this particular field — and in the middle, again, one of my favorites, Gayatri Spivak, and she is an Indian postcolonial scholar. So you can see a kind of wide range, a variety of scholars that work on specific areas of the world, and they have all come to give sort of different analyses of what postcolonial studies means, based on the part of the world that they're coming from or the cultural background that they themselves have embraced. Yes — today technology is not helping; I might need
somebody else's help — I'm sorry about it — I'll go ahead. Okay, thank you. Just the spacebar — yeah, just the spacebar — which one — thank you very much. Going back to my presentation: I divided the presentation into three parts today. In the first part I would like to talk about my relationship with Dickens, of course, and how I grew up with him. In the second part I would like to look at how Dickens is taught and is revered in South Asia, especially in Bangladesh and India, and, of course, one of the other things: how Dickens is still influencing sort of the educational institutions in Bangladesh and India even after, you know, so long — I think almost 56 years since the ending of our colonial period — 65 years, actually. So that would be my second part. And the third one, as I said: I would like to look at the reference to Australia in Charles Dickens's Great Expectations, and I'll give you some sort of an idea about how Dickens was read, how popular he was in 1861 and 1862 in southern Australia — and that would be the way. Dickens and I go back a long way. As a child growing up in Bangladesh, of course I revered him. Much of my love for British literature comes from my father, who is a history professor. My father completed his master's degree from Glasgow University in Scotland in 1975, and then he did his PhD in 1985 from Melbourne University, Australia. So, as you see, my first connection is with my father's visit to London — I've never been to London, I don't know what London looks like, only through pictures; maybe Augustana would take me there, yes. And the second relationship, with, again, Dickens and subject and country, is of course Australia, because I've spent three years there; I've had education there. So when my father came back from England, he used to tell us stories of all of these British writers, and one of them was Charles Dickens, and I picked it up because, as a history professor, he used to talk about A Tale of Two Cities — because, you know, he could get
to talk about London, he could talk about Paris, he could talk about the French Revolution. So he would tell me the stories, and then I would ask my father, okay, who's the writer — of course, you know, nothing much stayed with me, I was very young — and he said Charles Dickens, and I would ask him to tell me more stories about Charles Dickens. And he didn't have enough — he didn't read any more; and also he spent a lot of time at the university where he taught — his priority was the institution itself and also his students — so we got to see very little of him. So whatever time we got with him, we would ask him to tell us stories, especially of London. And I had the vision that Glasgow was a city covered in glass, and that's why it's called Glasgow — okay, of course: realistic? Not at all. So those would be the stories that I would love to hear. And going back to A Tale of Two Cities: I didn't understand what he meant by the French Revolution — I didn't care at all — but what I wanted to know about was Sydney Carton, because I felt that he was so brave. I mean, I didn't understand what love meant either, but I thought that he wanted to give his life for somebody else, and that was a lesson for me, and I absolutely loved that idea. And from then on, you know, I wanted to read more, but I couldn't read a single line of English at that age — of course we are taught English in our country as a second language, but I couldn't read any of the texts. So for that period of time I had to be happy just by listening to A Tale of Two Cities from my dad, again and again and again. I'm going to take a little bit of a break and actually show you what kind of libraries exist in Bangladesh, because this is what I actually grew up with. I mean, now, with my children, I walk down the street and there is the Moline library, and my children get books, you know, however many they want, and they bring them home. We did not have that kind of thing in Bangladesh when I was growing up; we had these sort of
rickshaw-pulled temporary libraries that would come to you, and you could pay a dime and they would loan you books, and you could go and read and return them the next week — sort of like that — until the British Council library; I was introduced to the British Council library. Now, I couldn't go to the city, because of course my mother wouldn't let me go by myself over there, so I couldn't access it till I was a high school student. So here is a picture of a library that you can see, and this is particularly a library in a sort of suburb in Bangladesh — sort of in between a village and a town — so it would be like near farmland, a town where there are 450 people; in the US, sort of like that kind of place. So you can see here that even the readers are elderly people who do not have access to newspapers, so maybe the library keeps one, so they will come and read it at the library. Students — maybe you need a dictionary and you cannot buy one — I mean, not everybody always buys a dictionary — so you will come and use it; and on the last slide you can see people looking through the day's newspaper. This is a very recent picture, because, as you probably know, with Bangladesh there's a lot of flooding — many of the places, many places, are low-lying, so there's always water during the monsoon season: June, July, August. To help the students who live in those areas, there are actually libraries inside boats — at the bottom here, and that computer is inside the boat, so there's satellite also, satellite connection, internet connection, and you can check your email — but it's in a boat, again. So the concept of a library — and this is very recent, it's the 1990s and afterwards — I mean, I happened to come upon this picture from a friend's site, who does studies on publishing technology in the US, and that's where I came across it. This is what I'm most sort of familiar with: these are libraries in a bus. So there would be a bus that would come — again, sort of one step ahead of the ones that we had when I was a child: the rickshaw-puller is no longer available, it's a bus driver, and inside the bus it's just a library. As you can see on the right-hand corner, the books are put onto the shelves. So these would be libraries, and they would have lots of collections, both Bangla and English — I mean, there would be a variety of books available here — and they would come, and it's cheap: it's like, what, a dime — less than a dime, maybe a couple of pennies — to get a membership, because the books are also cheap. I mean, the books that they buy are made — so, published — on cheap paper, because, you know, sometimes responsibility, sometimes accountability — yeah, that does not happen, so books get lost; so it certainly doesn't affect a lot of their business. That's why these books are available there. And of course, on the top is the British Council library in Dhaka. There was a similar one in Barisal, but it closed down because not enough people were coming and reading books. I was devastated by that — when I was, I think, a second-year university student, the British Council decided to shut it down — but still, the one in Dhaka still exists, and you can access almost anything there, both books that are published in Britain as well as in the US. So here is sort of an idea of how I grew up with the concept of a library. I wanted to talk to you about this because I remember Karen talking about this when she was young — there was a library room — and, you know, instantly I thought about what a difference between Karen and Dr. Youngberg and me in terms of understanding what the library concept is, and I thought it would be nice to kind of show you that — though both of them are sort of not very different, but not very similar; in many ways they're different. Coming back to my love of Charles Dickens: so after I grew up — like, I went to school and I started learning a little bit more English — then I would go; my mother would take me shopping with her. There was an age limit when mothers usually
take their children to shopping so i i actually came up to them and i would go with my mom to shop and we would buy all sorts of things grocery clothes and all of those things and then we would pass through a book shop and i just recently learned that same bookshop closed down because the owner died at the bookshop i used to buy my books from and i was decided that thinking okay nobody wanted to continue i mean wasn't there anybody in the family that would like to continue i guess it's not a very popular business it seems and reminds us of why some of the bookstores here close out so i went to the bookstore and i saw a tale of two cities in bengali that's the title over there in bengali somebody had translated the book in bengali and it was in the bookstore and it cost 12 daca 12 daka again which is less than a dime but 12 taka from my mother was a lot because you could buy almost a one half a kilogram of lentil or a cup of sweets have a happy chloramine sweet something like that so i had to beg her literally better and i think she was afraid that i was creating a scene so she decided to give me that to altar she was really afraid of her children creating this scenes where you know we're asking for money and she's not giving us still now she owns a lot but yeah she has still the same kind of uh vision about us so i got that book i read it over two nights now again we had restrictions we couldn't read past 8 p.m okay whoever has that but we had to do that it was just a rule that we had to follow but i had a small lamp next to my table i would bring it to my bed under the mosquito net because i cannot read it outside there will be mosquitoes coming and biting me so and i read it every two nights again at that age i didn't understand what french revolution meant for shiblo was translated the bengali is foreign and it came again and again and again i didn't understand but what i loved about the book was sidney carter he was my hero he was in love with somebody else's 
wife i mean by that time i understood you know what love meant and you know what first time love meant and all of those things and it was that romantic side of the story that attracted me uh to first of all to uh charles dickens and possibly that's why uh the title came up for me a postcolonial love affair um with charles dickens and i fell in love with sydney curtin and i read and i read that book till i came to the next charles dickens book again which was in bengali and this time it was great expectations now great expectation had a different effect on me why because i was looking through the world looking at the world through peeps eyes i wanted to be a millionaire i just wanted somebody to give me some money and i was thinking if i had that money i could be in a fantastic city like england you know colonization british empire nothing came to my mind i didn't think of those consider those were not taught those but all i thought about was that i want to be rich and go to england the metropolis city perhaps that was the reason why thomas machule said and he actually um kind of bring in the rule in his education for india in 1852 that you must teach british literature and language to the indians if you want to permanently rule india okay now as a postcolonial scholar i feel perhaps that was the reason why thomas and actually thought you know nobody nobody really gave uh thoughts to that point that could it be an emotional attraction for these readers that could influence the empire to stay for a longer time in those periods but later on as a postcolonial study study scholar when i deconstructed uh charles defense i understood that perhaps that was the reason that he tried to get this rule settled in india pre-independence independent india okay so again reading charles dickens's great expectation what i did was that i walked in pete's shoes um and i i looked at it looked at the world through peep's eyes and it was absolutely um astonishing for me fascinating for 
me to understand uh the victorian society that tickets was trying to portray and then from great expectation i came to bleekhouse actually because i read when i was a graduate student and esther somersen i fell in love with her too because she was that she was such a humanitarian character she was such a role model and such sort of uh background of poverty and um suffering um i related to her in a different level you know more in the realistic land realm this time not so much romanticizing dickens's world because i'm sure england is not the way i perceived at the metropolis with all its grandeur and loveliness and there are other sides to it too which i came to understand through bleak house now uh the importance of sort of studying uh dickens in bangladesh and india uh what is it about uh a dickens that you know people still study it in south asian countries people still study it in in bangladesh and india even though empire no longer exists i want to kind of look at that and i'll show you a couple of things that i have here in the powerpoint but before doing that i would like to quote uh thomas macule and his visions for a um empire that should come under the british british colonization this is what he said in his minutes on education in 1835 uh makuta proposed that the indians must be taught british literature and language um and i quote him uh we must present do our best to form a class who may be interpreters between us and the millions whom we govern oppressive persons indian in blood and color but english in taste in opinions in morals and in intellect so basically what she was saying that you should create a gentleman the gentleman theory that i was introduced to in a great expectation right if you are a gentleman well do you become a snob or you know what entails behind being a gentleman and i think he was also trying to push that idea in bengali the gentlemen were called babu babus it was very popular so these people you would see in native indian 
clothes, and they would speak English, and they would drink whiskey, and they would listen to the British music that came during that time. Those were the class of people Macaulay actually targeted, but of course there was the other, greater population who were simply influenced by the literary aspect of it, and I think it is the literary aspect of Charles Dickens's texts that still lives in those once-colonized countries. Because, if you remember — I think she was reminding us that Christmas is coming — each time Christmas comes, the movie of A Christmas Carol, the books, are there in the library, renewed again. The message that A Christmas Carol brings is universal, and I think a lot of the countries in South Asia pick up those messages, and one of those countries is my own native country.

So what Bangladesh did this year, for the 200-years celebration of Charles Dickens, was to make a movie out of A Christmas Carol. There it is — that's the movie, that's the advertisement of the film in Bangladesh; they changed the title to a Bengali one. And this is what I found fascinating about using Dickens in a postcolonial country: to give out a message about a modern problem — the economy, poverty, the love of money, for who doesn't have that, right? This film is based on a garments factory owner. A garments factory is something you might not understand — it's the sweatshop; "sweatshop" is the more popular term, right? They're huge in Bangladesh, and people have made so much money by owning a garments factory that you cannot imagine. And how the garments factories work in Bangladesh is that the raw materials come from China — the cotton and all the other materials — and then the finished products are made in Bangladesh. If you know the brand here, Faded Glory — companies like that have garments factories in Bangladesh; clothes come from Bangladesh, so anything and everything you wear, like Faded Glory, is made in Bangladesh. So what the directors did was switch the story. This man is a garments factory owner, and Boishakh is the first month of the Bengali new year, the Bangla new year. What people do on that day is spend a lot of money on wearing beautiful clothes, like orange and red — they will buy new clothes, and a lot of food, like 12, 13 dishes on the table; it's just a celebration, just like Christmas or anything. During that celebration this man doesn't want to spend any money at all; he's not going to give any bonus to the garments factory workers or anything, right? So what happens is that, inevitably, the ghosts come to visit him. That's the story, and that's what's happening in Bangladesh right at this moment — because, perhaps, another aspect that I like about Dickens is that some of these messages are universal, and one of them is here: having too much money, being a miser, not sharing. And the directors thought that yes, this is a time when we should show this. Now, the funny thing about it is that since it's a satire, nobody's going to be angry with the director. Perhaps that's one of the reasons why Dickens, in his own time, didn't mention colonization in Australia or any other parts of the world directly; he just left it as an indication. Perhaps that could be one of the reasons he never mentioned those things clearly in his books.

But as I wanted to tell you, this is why it's important in reading Dickens: people are moving it, people are using it in a much more cultural context. This is something done in India, The Mumbai Chuzzlewits — again, an adaptation of Martin Chuzzlewit by Charles Dickens, and this is actually a radio show done by Ayeesha Menon. If you go and Google-search it, it's probably still there, because it's been aired on the BBC, and I listened to it on the BBC. What this does is focus on a Catholic community in India, and again the story revolves around that Catholic family — just the same story, except the setting is different, the cultural setting is different here. So this is one of the things, again, that's being done with Charles Dickens.

This is sort of a map that I created myself — I'm very bad at even drawing circles — but I wanted to give you a glimpse of the global literary history that I see happening here with Charles Dickens. That's the United Kingdom over there in the middle; here at the bottom right-hand side are Bangladesh and India; here is Australia; and then over there is the United States. Now, why the United States? Of course we know that he came for a visit and wrote a lot of things about this country as a whole, but also because in recent times — and this is also sort of in the postcolonial studies area — there has been a rewriting of Charles Dickens. You saw the adaptation of Charles Dickens in cultural settings, which I showed you before, but many of Charles Dickens's writings have been rewritten, and I'm going to tell you about a couple of titles here. The historical novel A Far Better Rest — this was published in 2000, written by an American author, Susanne Alleyn, and it focuses, again, on retelling that same story of A Tale of Two Cities. Then the historical novel The Carton Chronicles: The Curious Tale of Flashman's True Father — this was published in 2010 by Keith Laidler; again it's a kind of retelling of the story of Sydney Carton. I wouldn't want to read it, because in this book Carton changes his mind at the end and decides not to go to the guillotine — so, no, no. But I think there are other issues here that you can rewrite, to at least give the voice again, because of the messages that the book itself carries. The third one is written by an Australian writer, and his name is Peter Carey, and the title is Jack Maggs. Again, it's a rewriting of Great Expectations, where
Magwitch, the convict, actually tells us the story — what happened — not through Pip's eyes as in Great Expectations; it's Magwitch who tells us what happened, how he came to England and met Pip, and then what happened after that. So I thought I would give you an idea of how great a writer he is and how he is still influencing not only South Asia but also the United States, where stories are being rewritten. That completes the first and second parts of my lecture.

I want to go to the third part, where I said I'd like to focus on Australia, because, if you know the history of that time, convicts were sent to Australia, and there were plenty of movies based on that. I learned about this by picking it up from Great Expectations and through my study of Magwitch's character. What happened was that you could be sent to one of these colonies just for picking up a loaf of bread from a shop — like a bite of bread; if you picked it up, you were declared a convict, and then you were sent to these colonies — and that fascinated me from a very early period. The other part of it is that in Dickens's writing, and in many other writings, there was no mention of the Aboriginal people at all — that those people existed, just like the Native Americans, and lived on reservations; none of that was referenced anywhere. So I thought I would look into that history, and again that's where I invested myself in my relationship with Dickens. I started deconstructing him, pulling him apart, pulling his texts apart. That doesn't mean that I fell out of love with him, but I learned more things: why perhaps he couldn't say it, how he looked at Australia, why it was important for him to send the convict to Australia and bring him back. Why was it so important? I pondered that question, and then I realized that of course there were a lot of things working behind Dickens's mind. One of them was: who went to the colonies, other than the convicts? The English people. And if you do a little more background study — which I did with my father, and I have sort of a major in history too — it was usually people who did not have money in England who would go to the colonies to make a name. This is what happened in British India many times, and in Australia too. They would go to these colonies, exploit the space of that area, and come back with money to England to live a life as if they were lords or barons or noblemen, right? And if you know the social structure of England, you would know how barons and lords are looked at, how they're revered, and how they're considered heads of many things. That's one of the things I think Dickens was criticizing, but again, he couldn't do it directly. So how did he portray it through language? Magwitch had a lot of money, and he came back and gave it to Pip, but Pip wasn't successful, and neither was Magwitch — he was caught again. So it didn't really help, Magwitch having all that money; Pip couldn't be a gentleman in that sense. He had a lot of internal conflict within himself, and he resolved it through his malady, but he wasn't the gentleman that Magwitch really wanted him to be, who could walk around wearing rich clothes with a stick in his hand and live a posh life in England. That didn't happen. So indirectly, that was Dickens's message: even if you have money from the colonies, first of all, that is not the right thing to do, because you are exploiting that space; and second of all, nobody can be rich with somebody else's money — that's a reality, and you have to figure that out.

The second thing I think Dickens wanted to focus on was the suffering of the children. I showed you our textbook very early in my PowerPoint presentation here, where the writer Bill Ashcroft — and this is his understanding — said that Dickens saw a similarity between the suffering children and the people in the colonies, the native people, in this case the Aboriginals — not the convicts, not the people who went there to make money, but the Aborigines. There is no mention of that, again, but he felt they were the same, because the children in England were exploited — think of all the other Charles Dickens texts where children work in factories and in household places; even Pip — and that is a very popular novel, and all of you have read it — even the work he has to do for his brother-in-law, to satisfy his sister as well. He saw the similarity between this native other and the children in England. But then again, he wasn't writing a critical analysis; it was a story he was telling, and how much could he have said through stories? So those are the two links that I, as a postcolonial studies reader, see. Even if you had money, it was worth nothing, because the metropolis, the center of the British Empire, had its own idea about what a gentleman is, or who has power — and in terms of power relationships, that was important for me to find out.

These are statistics, actually, and as I said, this is the South Australian Institute, which acted as a library during that time period, 1861-62. An interesting thing — and I'm fascinated by it, and sometimes I get goosebumps — is that the same ships that carried the convicts carried the books to the colonies at the same time. Can you imagine how heart-rending that is? Over here you can see a list of book titles that you could borrow in 1861 and 1862 in South Australia — Adelaide, I would think, is the place. Now, not all parts of Australia had been colonized by the British in that way; some of them were free states, and Adelaide, I think, was one of them. And look who's
at the top of the reading list in Australia in 1861 and 1862: it's Charles Dickens. The same ships carrying the convicts, carrying the books over here. And then we have Thomas Babington Macaulay — the total loans were 124, so he was also read there at that time. Trollope was very popular, of course — Anthony Trollope, 492 — because the Australian population (and this is specific to Adelaide), the people in Adelaide, were more focused on realistic sorts of books. George Eliot was also a popular one, because George Eliot made reference to New Zealand, so again, that was a popular author they were reading at that time period over there. Again, it's hard — I actually got these statistics from a scholar named Tim Dolin, who looked at how British mid-Victorian literature was read in South Australia, and I thought, okay, that's interesting to see. But he couldn't find any responses — there is no book review, no analysis of Great Expectations in reference to Australia. I doubt that people were able to catch those little bits and pieces of information in the text. That, perhaps, was the reason they could not pick these things up; and again, just like me, perhaps they were reading it for the romance of it, perhaps they were reading because it gave a picture of Victorian England. That's what they were concerned about: learning about another place, not necessarily relating it to the self — which, as a postcolonial studies scholar, I learned to do very late in my life. I read Heart of Darkness and I thought I was Marlow. Joseph Conrad — can you imagine? Then later on, when I deconstructed him: no, I'm not Marlow, I'm the other over there. But it took time, right? So 1861-62 wasn't a period for understanding who the other is; it was more about adventure. Okay, so that's the end of my slides.

All right, so those are some of the things that I found out, not only through my love of Charles Dickens but also through my studies. As I said, I look at Charles Dickens on almost three levels at this one moment. My love for his literature — still, Pip is in my life, you see, he's alive — and what I tried to do was tell my daughter to read it. I read that book to her when she was three years old, but somehow she didn't attach to it. I don't know — perhaps because she read Jane Austen; she's 14 and she's already read Jane Austen, so I think she considers herself the Elizabeth — the brown-skinned Elizabeth — of the 21st century. I had the ambition that she would be attracted as I was, but anyway, she is attracted to a British author, Jane Austen. I don't know anything about my son yet, because all he thinks about is Spider-Man, so we might need to revisit what literary studies means with him — perhaps bringing him to Augustana very early and giving him some lessons.

So today I would like to end this lecture by quoting some lines, and you probably all know them, because they are very popularly quoted, from A Tale of Two Cities, and they are my absolute favorite, although I read them in Bengali. This is in English — I'm not going to say it in Bengali. These are a couple of the lines that I would like to quote to you from A Tale of Two Cities: "It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way." And the reason I chose this is because, if you deconstruct some of the lines, in the beginning it's us, it's me saying it — because, going back to the universal nature of Dickens's writing, Dickens's novels: with the past election, and what's been going on throughout the world, we can easily say that it is the best of times if you think of technology, if you think of where we've reached as human beings; and then again, if you look at world politics, we are at the same time in the worst of times, right? But even though we have a little bit of darkness, what we are much more interested in is the light we can see at the end of the tunnel — the aspiration, the love for humanity. Perhaps that is what will stay with us, and that's why I think Dickens is an all-time writer. You cannot exclude him from any period; you cannot exclude him from any part of the world, no matter if it's a postcolonial country or the metropolis, which is Britain. He is ever alive, and I thought I would leave this lecture by quoting these lines to you. Thank you very much.

When I think about Dickens, I think about him bringing to light the plight of the poor in the factories and so forth, and I think he's often credited with perhaps helping the industrialized world move beyond that exploitation. Do you see the popularity of Dickens in some of the developing parts of the world as bringing that to light? I mean, we still have people who are being taken advantage of, exploited in factories. Do you see any parallels there?

Yeah, definitely. In Bangladesh, for example — I'm going to talk about Bangladesh because that's where I lived most — children are still being manipulated; orphaned children are still picked up and sometimes made into thieves, just as you've seen in Dickens's world. And then the poverty is still there in a country like Bangladesh. And then the other thing is the loss of property. You know, I didn't get to talk about the women characters; I
didn't have time, I know, but that's another part that I love, and we can do that some other time: Miss Havisham in Great Expectations, holding on to that old house and all the wealth, and all the people coming and wanting money from her when they thought she was crazy and going to die pretty soon. That's an interesting thing too, because in many parts of the world women still do not have rights to their property, and I definitely see a parallel there, especially in terms of orphaned children, no matter if it's Bangladesh or India. And actually Ayeesha Menon mentions that there's so much poverty and class difference in South Asia that you can actually switch Dickens's world with our world, still now, because of those problems that still exist, especially poverty and the issues of orphans.

Do you find a disconnect between the man and Dickens the writer?

Do I find a disconnect — the two not matching together? Yes, actually, as a scholar, I guess that's one of the things. I don't know if you were here for Dr. Carver's presentation — his love affair with the actress, which he probably denied or didn't talk about a lot, and leaving his children, his family, his wife — that's one of the things that comes to my mind. And the second one is the way he portrayed women: nowhere can you find a woman who is soft, who's nice; they're always hard women, at least in a couple of the readings that I have — and Great Expectations comes to my mind again and again: Miss Havisham, Estella, even Mrs. Joe. And then he doesn't give names — many of the women characters are not given any name. Again, Mrs. Joe is Mrs. Joe; she doesn't have a name of her own. So I notice those things, and I'm thinking, why not? I mean, why don't you give Mrs. Joe a name? You could call her anything, right — any name that you could give her. Yes, definitely — and I think that's the paradoxical nature. Sometimes I feel: here is my love for this author, and here are all these social realities about him. Should I go ahead and read more of his biography? I'm just going to leave it, because biographies are also differently written — it depends on who's writing the biography. So there is a set of biographies that talk about the things I have talked about — Dickens the great man — and there are biographies that point out these disconnections, this dissonance between the real social character and the author.

Can you speak to the education of young women in Bangladesh?

Oh yeah, sure — I taught there for three years. Sometimes it is difficult, because we're still under rules that have long been set, so it takes a little time to do a little shifting. I don't want to say bureaucracy, but I'm going to go ahead and say the order of things: things have been settled, ordered, in Bangladesh; people have been doing things in a certain way for a long time, and it's hard to break that. But in the three years that I taught at Rajshahi University, that was always one of my missions: to encourage women in education. And I tried to point out, even in teaching the texts, that you need to pick up these issues of how women are portrayed, not only in these classics but also in the literature of your own country, and you have to understand that position. Definitely, yeah — and if I ever get a chance to go back — actually, yes, I would like to do an exchange program with Bangladesh too, and perhaps with Australia, oh my god, that would be my target. My session would specifically focus on looking at these issues in things they have already read. For example, Miss Havisham — pick up Miss Havisham, and why not? Why isn't Miss Havisham the voice of that piece? Why couldn't she be? She wasn't doing anything wrong — I mean, she was hurt. So, in that way, I could easily do a session on that too.
english_literature_lectures | WILLIAM_COBBETT_and_CHARLES_DICKENS.txt |
Hello and welcome to this week's History and Contexts podcast. We've been discussing Cobbett and Dickens as part of the journalism course here at the University of Winchester. I'm Sebastian Ferris, and I'm joined here in the studio by Brian Thornton, Chris Horrie and Edmund Griffins to discuss the work of these two 19th-century journalists, with production work by Andrew Giddings. The lecture given by Brian Thornton today looked at the tremendous social and economic changes that took place in the British Isles after the Napoleonic War and how these were reflected in the journalism and literature of the time. Brian suggested that this was the period when everything we know came to the fore — a tale of two revolutions, as it were. How true is this, Brian?

I think it's a fair way to introduce the subject of journalism in the 19th century. We have to consider that the French Revolution was a turning point in European history, and people always summarize the period as having been defined by two revolutions: one political, in France, and the second industrial, in England. The Industrial Revolution was in some ways a response to the revolution in France, and England was dragged into a war, the Napoleonic War. At first it was involved by essentially paying for mercenaries to fight the war, and eventually it was involved directly, with troops on the ground in France, culminating in the victory at Waterloo. But what happened during the Napoleonic Wars was crucial: Britain became preeminent on the seas. The Navy had absolute power, really; it controlled the waterways and blockaded the ports of France so that France couldn't export — their whole market was crushed — and into that void stepped Britain, which became dominant in its trade with America, South America, India, the Far East. It was this period when the British Empire started, while the great European powers were being drained and destroyed in the Napoleonic period. Britain was thriving, and it was boom time in England: they were supplying the army, and massive new markets were opening up. During this time Britain was, as I said, building the Empire — in India, Singapore, South Africa — mainly on the back of what they call the transatlantic triangular trade. This was the transporting of slaves from Africa, then the taking of cotton from the plantations in the American South, bringing this cotton to places like Manchester, Bristol and Liverpool, where it was refined; then the finished product, clothing, was exported to Africa, and so the triangle began again.

So it was a time of intense change in England, and a time of political change as well, because Scotland was already part of the Union, and in 1801 Ireland became part of the Union and it became the United Kingdom. And there was a mass exodus from the land, from the countryside. People were driven from the land, really, because of the policy of enclosure. This was where landowners expanded their fields. Before this time there was the idea of the common land: a landowner would have an area that was open to the local villagers, free for villagers to use, so that their livestock could graze there, and so it was possible for smallholders to have a livelihood, to sustain a family — not to live very well, but at least to keep off the breadline. But after the enclosures, it was thought that because of the new machinery big fields were needed, and they needed to get these smallholders off the land and enlarge the fields. And so these people were driven into the cities in their hundreds of thousands, so that the cities swelled. These cities weren't capable of taking this level of population, so around the edges they started to crumble, and things like sanitation and infrastructure were simply swamped by the number of people who came into the cities.

The two individuals we're talking about were looking at the two different sides of this dramatic change: one was Cobbett, who looked at the countryside, and the other was Dickens, who looked at the urban sphere. What were these particular cities like?

I mean, it was Dickens who looked at London — it was his obsession — and this is the city we think about when we talk about the Industrial Revolution, but really the center of the Industrial Revolution was Manchester. Manchester — this incredible city, this modern city, the city that people from outside came to look at and were just astonished by. And, I mean, Chris, you're an expert on this area of the Industrial Revolution — is there anything you'd like to address?

Well, it's my hometown, my home city, and I grew up there. In the 1960s I was a child, and it was an incredibly grim place; they were just starting to tear down the back-to-back housing. This shanty town had grown up in one generation at the end of the Napoleonic War: the population had gone from about fifteen thousand to two hundred and fifty thousand. If you scale that up to the population of the British Isles now, and the population of the world, that means a city of 50 million coming from nowhere, in a pretty bleak part of the world anyway — it's always raining, and there are moorlands, horrible, not like the lovely Lake District or anything; it's cloth mills and moors around there, really grim. And all these old Victorian buildings were absolutely covered in soot, because one thing about these cities was the extent to which they were polluted — and in Bleak House, which we're reading
they described these people whose job it is simply to sweep odia off the streets for fashionable ladies so they cross the street we would have to sweep all the crap out of out of the way unbelievably unhealthy and I would see going to see my grandparents in a particularly awful part of Salford they people old women bent double with with rickets and kind of chest diseases that they'd they'd gotten as children around about the year 9 19 after knowing the 1880s they would have been his old woman whose lungs and chest been completely destroyed by cotton because Manchester was all about cotton spinning mills and all these fibers you know kind of a bit like asbestosis all these lung diseases who just it was it was like hell it was like hell on earth and the only thing that was preserved or saved from those times these churches and there's dozens of these things and they all look the same there now haven't got any congregations they caught their dreadful places with these spouts spires covered in gargoyles and kind of black sort if the you know it's the vision of LS Lowry the only artist ever to come out of Manchester Manchester the place is almost no I culture whatsoever we have this one artist Lowry dreadful and so that'sthat's Manchester was the workshop of the world though briefly there about 1850 it was the most important place on earth really I'm one of the most interesting aspects of this period it was the so desperate attempt by the government to control these sort of colossal forces and one of the attempts that the government made was the Corn Laws I think that at Manchester in particular was sort of affected by by the Corn Laws can you sort of explain why that's right um the it's three the main severa civic monuments in Manchester and the main Civic thing is the free trade hall is named after |
english_literature_lectures | The_Mind_and_Times_of_Virginia_Woolf_Part_2_of_3.txt

The marriage between Virginia Stephen and Leonard Woolf, which started in 1912 and lasted for the whole of the rest of her life, was, I think, a very good marriage. It was a marriage which began in total desperation, because the minute they got married she became extremely ill, and you can draw your own conclusions from that. Clearly her illness was triggered by the new situation, by the shock of having to come to terms with being a sexual being, a sexual partner. It seems clear that they did not have a normal, if that's the word you want to use, or continuing sex life. For a period of about three years, on and off, she was incarcerated; she was under the care of nurses; she tried to kill herself; and she was heavily treated with sedative drugs and with a rest cure, a treatment which was very fashionable at the time. You were put in a dark room, you were made to drink milk and eat animal fat; you were to be left in the dark, not allowed to talk to anybody, read or write.

When Virginia went off her head, which she did about four times in her life, it was a total transformation. She was insulting and cruel to the people she loved most, like Leonard Woolf. She spattered people. She thought that Edward VII was coming to dinner when he'd been dead for twenty years. She also had periods of mania, very, very high, exhilarated peak periods where she talked wonderfully and sometimes wrote wonderfully. And then gradually she emerged from these extraordinary traumas and was able to say, as she did in one of her letters or her diary, that it's really great fun being mad: you have the most wonderful ideas, better than you do when you're sane. But it wasn't fun for her, or for anybody else, when it was happening, although as a writer she was very good at portraying the instability of the mind and the way it flits this way and that, and catches on some things and jumps over others.

'I begin to loathe my kind, principally from looking at their faces in the tube. Really, raw red beef and silver herrings give me more pleasure to look upon.'

Leonard felt it was the stress of modern life that was the cause of some of Virginia's breakdowns. He moved to Richmond, to Hogarth House, where he felt she would be less likely to become over-excited by society. She didn't publish her first novel until 1915, when she was thirty-three, and this was because she had worked at it and worked at it and worked at it, all through her twenties, all through her periods of mental breakdown. It had been a fantastically difficult novel for her to write. This was the novel called The Voyage Out, because it was about her childhood, the loss of her mother, and her becoming an adult.

While Virginia Woolf was still trying to write her first novel, her sister entered upon a very radical period as a painter and cut out detail and representation. She began painting portraits of people in which the face is left empty. It was a sudden, very daring method of representation, because these portraits do convey character, and Virginia was intrigued, and I think gradually began to wonder if the same thing could happen in literature. She was really attempting to describe people's relationships not through the way they talk to each other or behave to each other, but through what they didn't say to each other, what was in their minds. It was, let's say, the method which has become known as stream of consciousness: the body language without the body.

'The day after my birthday; in fact I'm 38, and happier today than I was yesterday, having this afternoon arrived at some idea of a new form for a novel. For I figure that the approach will be entirely different this time: no scaffolding; scarcely a brick to be seen; all crepuscular, but the heart, the passion, humour, everything as bright as fire in the mist.'

The Woolfs founded their own printing press and their own publishing house, called the Hogarth Press, in 1916, when they were in Richmond, and the basement of the house in Richmond was the office of the press. This was very, very important for Virginia, because it meant she could publish her own work. It also meant she could publish little books, little sketches, little stories, things like Kew Gardens and The Mark on the Wall, with beautiful covers done by Vanessa, and it freed her up to be an experimental writer.

'On Sunday Leonard read through Jacob's Room. He thinks it my best work, unlike any other novel. Neither of us knows what the public will think. There's no doubt in my mind that I have found out how to begin (at 40) to say something in my own voice.'

From 1919 the Woolfs lived in Sussex as well as in London. They bought a rather small, quite ordinary little cottage called Monk's House in the village of Rodmell. Virginia Woolf wrote mostly in her garden shed, which she called 'my Lodge', and sometimes she would sit at a table there, but very often she would write standing up; she had a special desk made for somebody on their feet, you see. And it was in that Lodge that she really composed the greater part of her novels. It was her sanctum. The Sussex landscape was extremely important to Virginia Woolf. She walked all the time; she was a great walker, like her father, and you can imagine her striding over the Downs, wearing a sort of terrible old hat and shouting out loud the next paragraph of her novel, because she used to talk out loud to get the rhythm of the sentences.

She had no children of her own, and so she adopted, for very short periods at a time, other people's children: her nephew and niece, of course, and Vita's children, my brother and myself. When she was coming to stay at Long Barn my mother would say, 'Virginia's coming for the night,' and our immediate reaction was 'Oh good!' Everybody said, 'Oh hooray, Virginia's coming to tea, now we shall enjoy ourselves,' because she was very enlivening and inspiriting. And then she would sit us down and interrogate us. I remember once she said, 'What has happened to you this morning?' and I replied, 'Well, nothing.' 'Oh come on, come on,' she said, 'what woke you up?' And I replied it was the sun, the sun coming through our bedroom window. 'What sort of a sun?' she said, 'a kindly sun, an angry sun?' We would answer that in some way. Then she was fascinated by the detail of how we dressed. Of course, what she was doing was gathering copy. She loved nothing so much as to have people she liked come to tea, and to quiz them, and to ask them every single detail about their lives, and that intense curiosity is obviously part of what makes her a novelist.
english_literature_lectures | Claire_Tomalin_in_conversation_with_John_Mullan_at_British_Councils_Dickens_2012_in_Berlin.txt | [Applause] I'm very grateful for that extremely flattering introduction uh because I know I've come out of England and out of England biographers are not well regarded and uh we we are well aware of this not just in Germany but anywhere in Europe and so uh go sorry I've got the wrong bit of papers here uh uh we're always on on the defensive so John's U very nice words about my work are very charming and I thought I'd start by just saying a little bit about Dickens in Germany uh because he's not very much associated with Germany everyone knows he loved France and went to France a lot but I thought you might be amused um of course his work was translated from the pqu papers on and he was very grateful for that and he he had good friends in Germany um um he sent two of his sons actually uh to study in Germany his eldest adored son Charlie was sent to leig to learn German and to study to become a businessman and the very perceptive uh professor in laig wrote to Dickens and said um your your son is is a clever and Charming boy but I don't think he's set to be a businessman and Dickens was absolutely insistent that Charlie was to be a business businessman and went on forcing him to be a businessman and of course Charlie turned out to be hopeless and was always becoming bankrupt and Dickens was always bailing him out and in fact at the end of his life Charlie was bankrupt and Charlie's daughters in 1911 after Charlie's death had to turn to the nation and ask for a grant from the nation because they were so impoverished people who think writers make a lot of money take note of that um yes something quite irrelevant really but I just found out the Fontana teodor Fontana was a neighbor of Dickens in Tavistock Square in 1852 it's a useless piece of information because they never met but here's this great I would like to say they had an interesting 
conversation which I would like to be able to print in which Dickens told Fontana all sorts of things he never told anybody else but it is not so but Fontana of course is wonderful wonderful writer now just two more little things at the very end of dickens's life he was visited by uh a young woman called constant cross who wanted to become a writer and she wrote to Dickens and said could she come and see him and he said yes come he had rented a house High Park Place in the center of London and um in April 1870 so very close to dickens's death um she came to see him to ask his advice and she wrote an account of her conversation with him well we know that we can't always believe these are a gospel truth however this is what she wrote she said they talked of Gerta and lesing and he seemed pleased to find that I had not been attracted or influenced by Hina why why anyhow we know Dickens didn't care for H then he told her that he was contemplating a stay of two years in Germany in order that he might acquire the language colloquially wise Dickens he told me that he decided to go accompanied by his daughter to turingia so I suppose viar and the beautiful countryside and that he was much looking forward to the accomplishment of this plan well his daughter casy was married and his daughter M Mary never said a word about this so you wonder if the daughter he was P's daughter was a euphemism in this case and the young woman he was planning to go to Germany with was not a daughter but isn't that isn't that absolutely extraordinary that Dickens uh two months before he died was thinking or said or for a moment he was thinking that he would come here and live in Germany and wouldn't that be wonderful a wonderful imaginary story we could tell about Dickens in Germany perhaps Peter akroy could write it you know it's absolutely up his street uh finally my last little German point John fer dickens's great friend and great biographer tells us in his chapter on little doret that when 
bismar was ready to open fire on Paris in September 1870 shortly after the death of Dickens he met the French uh leader jul fav under the walls to negotiate what was going to happen and the great German Helmet Von vulka the general I think he was or maybe Field Marshal but anyhow the great military leader who had be in charge who of or the campaign and who was a brilliant linguist and a writer himself as bismar and Fa debated and discussed what was to happen together Fon mulka was sitting in the corner reading little dorit isn't that amazing so that is my my very very few little links now I want to let you hear the voice of Dickens in a letter so I'm going to read you a short letter um it's a letter he wrote in March 1844 his Elder married sister giving a short account of the what he'd been doing he'd been in Liverpool and he'd been in Birmingham and I'll explain to you later what he was doing there but this is one of my favorite short letters of Dickens Dickens was one of the great letter writers and the 12 volumes of his letters tell you so much about England in the mid 19 century and they also reveal uh Dickens as the Great letter writing performer I he performs his letters and the most they are absolutely amazing of course Dickens would have liked all those letters to have been destroyed we always have to remember that he would be horrified that there are 12 volumes of wonderfully edited of his letters but we can be grateful so my dear Fanny I left Liverpool at halfast 10 on Wednesday morning and reached Birmingham at about Hoff BOS 3 that was the new Railway then where I was received by some most excellent fellows and straightway conducted to the town hall which positively took away my breath Birmingham Town Hall was newly built it a greek temple huge greek temple it's still there it's not the town hall anymore raised up very high it's the most extraordinary and impressive building took dick his breath away a committee of ladies had decorated that immense 
building with artificial flowers to such an extent that it looked like a vast garden and on the front of the gallery facing the platform were the words welcome dick now Dickens used to call himself dick at this time very often uh in gigantic letters of the same material there were also transparencies representing diverse fames in the act of crowning diverse dicks to the Unspeakable admiration and Delight of the queen her refy was also up there Queen Victoria who looked on approvingly now he pauses and he begins to tell his sister what he'd been doing the night before in Liverpool having danced a sir Roger deav that is a very vigorous country dance which goes on for hours couples running up and down the room a s Roger de Cav of 40 couple until 3:00 in the morning I was rendered rather nervous by these Splendid preparations especially as I was vexed to have nobody with me to see them but I dined by myself at the Inn to took a pint of champagne and a pint of cherry dressed in The Magpie waste coat and was as hard as iron and as cool as a cucumber again at 10 minutes before 8 they fetched me the hall was crammed to the roof with all the First Rate Tes wigs and radicals in the town and ladies in full dress were standing for want of seats in all parts of the crowd the moment dick appeared the whole assembly stood up with a noise like the rustling of leaves in a wood and then began to cheer in the most terrific manner I ever heard beginning again and again and again when they at last left off dick dashed on and I must say that he delivered the best speech I ever heard him make so advice to writers if you're nervous about getting up to speak pint of champagne and a pint of cherry and you will be absolutely absolutely safe well I little bit of context for the letter because it is a marvelous letter and I've talked about him giving a performance he does present himself as a character doesn't he we see he's 32 we see his High Spirits we feel his High Spirits we know from this 
letter that he loved dancing he would D dance till 3 in the morning that he liked a good drink that he cared about his clothes The Magpie waste coat a famous waste coat of his that was black and white and he he as soon as Dickens had enough money to buy fine clothes for himself he he bought find close for himself although he thought of he is a Victorian novelist he was very like a regent Sandy Dickens he loved to to present himself um as a as a as a very and he he wore his hair very long had beautiful glossy hair well you can see it over there that wonderful mccc's picture of him um he looked very good um what else does that tell us well I must explain to you this letter 1844 it was the Hungary 40s uh England was in a recession people were unemployed people were hungry life was very hard and uh getting worse and it was very like life in England now it was worse than life in England now but uh there are great similarities with life now and what Dickens was doing in Liverpool and in Birmingham was this he was speaking to raise money for what were called Mechanics Institute or poly Technics they were set up by benevolent people for working men and working women also to have access to libraries to lectures and to uh education further education uh not only in the Arts but in the sciences and Engineering they were very very important uh places Dickens spent a lot of time traveling around helping to raise money in the industrial towns and many of the mechanics institutes developed later after dickens's death and became the great universities of Liverpool and Manchester and Birmingham and so on leads um so it was a wonderful thing he was doing and he maintained an interest and he went on when he began to do public readings he would go and read in those towns and he was loved he was loved by the people of those towns because they knew he was on their side and they also of course read him because he was absolutely accessible to all to all sorts of readers and his books came 
out in monthly Cals paper covers and they could afford to buy them and read them so he was he was that's why that's some of the reason why he was adored by the people of England um so the hungry forties well one thing he' just done when he went on those trips is he published A Christmas Carol and of course the Christmas carol is written for the hungry 40s and it's written for today Scrooge is a city man a rich city man uh and at the beginning of the story he is asked to contribute some money to help the unemployed to help the poor and he says uh well uh you know they're looked after aren't they um I help to support uh the workhouses and the treadmills the places where they can go when when they can't to manage anymore and he refuses to give the money um as The Story Goes On as you all know uh Scrooge is softened by The Experience he has experiences he has with the ghosts and the spirits he meets and he's taken to look at his own past and immediately he begins to change as a man and when he's with the spirit of Christmas present who's a very jolly Spirit um at this it's just coming to the end of the time with him out of the Spirit uh cloak come two wolfish CH children yellow a boy a girl yellow meager ragged scowling wolfish Where Angels might have sat enthroned devils lurked and Scrooge says to the spirit are they yours are they your children and the spirit says they are mans this boy is ignorance this girl is want beware them both but most of all beware this boy for on his brow I see that written which which is doom and he stretches his hand out to the city where the Doom will arise and Scrooge says have they no refuge or resource and then Dickens in one of his most wonderful rhetorical devices says makes the spirit say to Scrooge his own words are there no prisons are there no workhouses and that's how that chapter ends and it's a sort of it always makes my hairst on end almost when I read that because it's such a powerful piece of writing um and that must have 
been read also in those industrial towns which were full of unemployed and hungry children um Dickens uh wasn't only visiting uh the industrial towns he also made a point in London of going to the Ragged schools which was set up for Street children and uh he uh wanted to help raise money for them and he wrote a marvelous description to his friend Miss Coots of visiting a ragged school and um and he suggested that rather than trying to give them elements of religious education which he said they were in no position to understand these were children who were in and out of prison who had probably had no parents or if they did they were neglectful parents they knew nothing they had and he said to teach them religion was really not the point what they needed was a place to wash in the school this is another marvelous bit of Dickens Dickens saw once this is is what was actually needed somewhere to wash for those children and he um gives a splendid account of how he takes his hat off to the master of the school and all his beautiful shiny hair falls out and one of the boys says well you can tell he's not a barber and they all look at his shiny boots and they look at his his smart clothes and he obviously appeared a rather uh Godlike figure to them and he wasn't going to say to them that he had been really really quite like them not so many years before well he did these public good things he did private good things whenever a friend of his died which happened quite often leaving children uh with no resource Dickens would immediately raise money an actor called Edward Elton uh was drowned his wife was already dead there were seven children Dickens at once formed a committee raised thousands of pounds so that all those children would be given an education and a training so that they could live proper lives and he remained in touch with them Esther Elton the eldest became a went to a teacher training college and became a teacher and he was in correspondence with her 20 years 
later and they all wrote to him uh and many many years later to thank him it's pretty extraordinary that and he always did that when somebody died um well just a final tiny point about 1844 who was in Manchester in 1844 at this point Engles are preparing to write his condition of the English working class very great book Engles complained that in England only the writer Thomas carile was interested in the condition of the working class I can't help hoping just possibly Engles might have cast his eye over A Christmas Carol at some point so I think it's probably not very likely but I think that that uh puts uh puts that letter in context I would if you if you squeeze a letter uh by a writer you could it can sort of lead you along many paths um and uh that's what I've tried to do there um so now I'm with great pleasure going to join in conversation with John you've had enough of me standing up here thank you I bring my water with me I have yes we must we must displ the book mustn't we so pretty whatever is inside it it does look nice doesn't it um I was I was going to stop by asking you Claire about um dickens's attitude to your to your trade really biography because you talked about the letters surviving so compendiously but he would have liked to have seen them destroyed does that does it imply that beyond the authorized version he rather dreaded biography well I think like most writers he had very mixed feelings about biography he was offered um some notes about Charlotte Bronte when she died and he wrote very scathingly saying we don't want that sort of rubbish in in our magazine and household words you know I mean looking in the intimacies of people's lives no no but at just about the same time a little earlier actually he had asked his friend John fer when he was still Dickens was still in his 30s so he was thinking ahead he asked Foster to become his biographer and um it was a brilliant choice of course because they were two men who really loved each other they 
met young they both came from humble beginnings forer's father was a Newcastle Butcher and he was he went to a grammar school so he was better educated than Dickens but when they met in the 30s they both were sort of determined to conquer literary London and uh forcer saw that Dickens was a genius in he reviewed pck papers and he particularly noticed when he got to the prison scenes in pck papers he didn't know anything about dickens's background or his father but he saw that something in the writing got sort of somehow stronger and um and more interesting there and dickin saw that forer could help him um and indeed uh forer became effectively his his agent he read his proofs with him he gave him advice he also reviewed him he reviewed just about everything he wrote um he became the the family friend but he he and Dickens had a a really a great love for each other and letters from Dickens to forer that Dickens actually says this friendship will be till death us dupart he actually uses the word of the marriage service in England um and he trusted Forster all his life they had occasional quarrels but nothing really serious at the very end of dickens's life he was going round though he was very ill to read the next installment of Edwin drw to The Fosters Foster was by then married um and he told Foster about the secrets of his childhood he told fer everything about his private life when it was very secret to the world so he made a brilliant choice and I think Foster's biography is one of the great 19 there many great biographies written in the 19th century and forers is certainly one of them when Dickens died Foster said there is no more happiness for me uh it my happiness won't I won't ever be happy again that was the strength of his feeling but I have a duty and the duty was to write the biography and presume presumably Dickens would have expected forer would he to have made the right decisions about which of those secrets to keep and which to yes I mean forer was 
sometimes a bit too discreet I think for instance he just mentions that Dickens was going to set up a home for homeless women I for young prostitutes with Miss Coots and he says I will I will say more of this later but he never does I think he decided we can't really do that and naturally naturally there's a lot not in fer because forer couldn't possibly have written about the Turnin because they were still alive and the family nobody in those days nowadays people don't mind having an adulter in the family but in those days for a long time afterwards dick death no family would openly uh discuss such a thing so he's not to be blamed I think Foster for not not saying but he did he did he did write a bit about about his childhood and Dickens seems to have I mean obviously dick gave him gave him his written account so he wanted people to know about his childhood didn't he yes he did and I mean writing the biography of him did you I mean it seems reading it to me that um you you found dickens's childhood quite as kind of important and and sort of vivid as we we you know as the story that's been told before it's absolutely fascinating yes one of the things that struck me very much he was called Charles after his maternal grandfather Charles Barrow Charles Barrow worked for the Navy pay office where now in 1812 the Napoleonic Wars are going on the Navy is the biggest employer in England so his maternal grandfather and his uncles and his own father also worked in the Navy pay office that's how he met his mother um but uh Charles Dickens never met his maternal grandfather because just before he was born he was found to have been eded Bing he had a high position he'd been embezzling money from the office for years and his his explanation was that he had a very large family very large family and uh he hoped he would be forgiven he wasn't he had to flee the country and I always think that at the very beginning of Charles dickens's life there was this sort of secret that was 
never mentioned. There was a grandfather who was not in England and was never mentioned; he just didn't exist any more. And I think that's an interesting beginning for the life of a novelist. And then you get another secret, when he was put to work in London, in the blacking factory, as a small boy. It was a terrible shame and humiliation to him, and when it was over, his parents, who had agreed to this happening, he tells us, never mentioned it again. It was another secret: everybody knew it had happened, he knew they knew, but it couldn't be discussed. I think for the mind of a novelist, the imagination of a novelist, to have these secrets must have formed the way he thought about human behavior and human life. And then in the last twelve years of his life, his life became a tissue of secrets and deceptions and things that couldn't be said, so he was in that territory again, very familiar territory. I find that, well, fascinating. And presumably it's quite difficult in a way, when you're writing a biography; it is an exercise of sympathy. There comes a stage when he becomes involved in secrets, as you say, where it must have been quite difficult. I mean, when his marriage starts breaking down, it is quite difficult to sustain that sympathy, isn't it? Well, there are two things going on there. The secrets, fine; the bad behavior to Catherine, not. I think Dickens was not just a genius, the greatest creator of character since Shakespeare and so on, all the things you said about him as a writer; I think he was also an intensely good man, a man who wished to be good and who wished to do good, and indeed did much good, as well as being a wonderful writer. I think when he decided he couldn't stay in the marriage, and he fell in love with Ternan, he had a great public persona, a figure who seemed to represent the domestic virtues and the other virtues, and he sort of
desperately wanted to appear to be in the right. I think many people do who go through a crisis of that sort, men especially perhaps; they want to emerge as the one who's right. And so it made him behave really badly and write lying accounts of Catherine to Miss Coutts, and say things about her being mad which were not true. But one sees it happen, and so it was peculiarly painful. I say in my book that I sort of wanted to avert my eyes. And his daughter Katie, his very intelligent daughter Katie, who gave good accounts of all this, said my father behaved like a madman, and he obviously did. But then, biographers are not there to tick people off and to say how bad they are. They have to try and understand; they have to try and be tolerant of their subjects, I think, because it's not interesting to rap them over the knuckles. But even so, as you say, there's a great sentence in your biography: "he was determined to be in the right about everything", which, stated boldly, sounds like a rather unsympathetic thing to say about a person. But actually, the funny thing is, I was thinking as I was reading that about his novels, and it's quite a double-sided thing in a way, isn't it? Because that's also somehow part of the force of his writing. Perhaps it's not always to contemporary taste, or perhaps it's slightly embarrassing sometimes, but there's something rather wonderful about his determination to be right, as well as something potentially rather unlikable. Yes, yes, he could be a monster, like one of his own monsters, like Quilp or Squeers, with this huge energy. I think his energy is one of the most extraordinary things about him: not just the walking fifteen or twenty miles a day, absolutely necessary to him. I think as he walked he imagined, and produced these voices you've talked about. I mean, his books come
through voices, don't they, which he thought of. But if you think that he began his career by writing novels in tandem: halfway through Pickwick Papers he started writing Oliver Twist, so each month he had to produce two quite separate installments. It's bad enough to be writing a serial anyhow, but to be doing two completely different serials... When he'd finished Pickwick Papers he started Nickleby, so then he had to write an installment of Twist and an installment of Nickleby. I don't know, maybe some of the writers here have done this, but I cannot think of any writer I know of who was able to do that. Then, about 1850, his career going, he had reached financial stability and success with Copperfield, after Dombey; he was secure, and he started on a second career. He set up a magazine, a weekly magazine: he was the editor, he found the contributors, he wrote a lot of the contributions. So he was running two careers: he was continuing to write novels and he was running a weekly magazine. And then in 1858 he took up a third career, as a performer, as a reader. So for the last twelve years of his life he was writing great novels, he was running his weekly magazine, writing journalism, dealing with contributors, and he was stomping around doing these readings. He was a prodigy, wasn't he? What seems extraordinary is not just that he was like this, but in a way that he knew he was like this. One thing that struck me reading the biography, which in a way I wasn't conscious of before: I was conscious of him as this extraordinary dynamo, but he keeps saying "I am an extraordinary dynamo", sort of thing. There's a wonderful bit of a letter here you quote, to Mary Boyle, where he says, constituted to do the work that is in me, "I am a man full of passion and energy, and my own wild way that I must go is often at the best wild enough". I know he's not just talking about writing there, but,
you know, "as I am constituted": he thinks of himself as this extraordinary... Yes, he uses that phrase about going his own wild way several times in his letters; it's obviously one he found absolutely apt to describe himself. And what one loves about him too: when his old schoolmaster sent him a snuffbox, when he'd just written Pickwick, inscribed "to the inimitable Boz", he then started calling himself "the Inimitable". The family say apologetically, well, it was a joke, but I don't think it really was a joke, because he knew he could do things that no one else could do; he knew he was inimitable. And what do you think, I mean, we won't dwell on his marriage too much, but about the secrecy that dominated the last part of his life? Quite strikingly, reading the biography, one becomes aware that amongst these male friends he had, and he had lots of these male friends, contrary to what one supposes about the nineteenth century, lots of people lived quite unconventional lives, sexually and maritally, and were obviously able to do that, in the circles they moved in, without any great opprobrium, or losing their jobs, or not being read if they were writers. And I almost started thinking, would Dickens have been... because of course there's the George Eliot story: her publisher was terrified that people would find out that she was living with a married man, but then people did find out and they carried on buying her books. Was his image so tender, do you think? I think it was, to himself, yes. You're quite right: Wilkie Collins had two mistresses, Frith had two families, Cruikshank had two families; there was a great deal of that. But Dickens had a very special place. By the time that happened he really was so much the man who stood for the domestic virtues, for the happy Christmas where everyone
comes together and everyone forgives everyone. The contrast between what he says about Christmas and what actually happened at his late Christmases, when his mother wasn't there and his wife wasn't there and no quarrels had been made up, is quite painful. But yes, perhaps had he lived on, something else might have happened; it's true, but it obviously wasn't possible for him at that point. And it must be very difficult, it's obviously quite difficult, to bring alive what it was like for Catherine to be married to this man. Well, I think she was simply squashed; there was no space for her. Before they were engaged, he had been in love with Maria Beadnell and suffered terribly, being jilted by her, and he wanted to get married, he wanted to put his life in order. And here was this nice, pink-cheeked, docile girl with an interesting family from Edinburgh, whose father had known Walter Scott. And so they got engaged, and she threw a little tantrum, and he wrote her an absolutely chilling letter saying if she was going to go on like that it was no good; "there will be no second warning", he said to her, in this love letter. And they got married, and she was pregnant in the first month, and she was pregnant most of the time afterwards. And Dickens was the Inimitable: he decided everything, where they were to live, how they were to live, who they were to see, who their friends were. He was a man of obsessions. He wanted to have three children; he was very happy when he had Charlie and Mamie and Katie, and he didn't want any more, and then more and more babies came. The one good bit of their marriage, it's interesting, was when they went to America in 1842. Dickens wanted her to go with him, and amazingly she was not pregnant. They were traveling two thousand miles around America, and so he wasn't writing; he didn't have any of his men
friends around, any of the male social life that he lived in London, and they got on extremely well. He wrote letters saying, you know, she's really an "out-and-outer", she doesn't complain, she's absolutely terrific. And for that period, it's rather sad, they seem to have had a cheerful relationship. Probably no sex, though who can know; but it is striking that it's the one time in the marriage when she doesn't get pregnant, and she doesn't get pregnant again until after they're back in England. So they were only happy when they stopped having sex together? Well, when they were able to be companionable, and neither of them was anxious. What you were saying about his friends: obviously he was sort of a man's man; he loved clubbable male company. That's the way social life was ordered in those days, yes, that's right. They had their clubs and they went out together and did all these things, they went on jaunts. Dickens was wonderfully convivial; as Trilling says, the mere record of his conviviality is exhausting. But it's almost as if on that American trip, for the first time, he's actually thrown back on the company of his wife, and discovers that she's quite an interesting human being. I don't... she wasn't very interesting; she was sweet and dignified and docile. And physically, having all those babies... One of the terrible things I did find out was that they all went to wet nurses: she had the babies and then the babies went off to the wet nurse. When you think of Polly in Dombey, that wonderful account of a wet nurse, the most beautiful bit of writing. Who of course is Richards. Yes, Richards; it's got to be Richards, and then she's dismissed. And then Georgina, her much, much younger sister,
becoming Dickens's little pet, and being the lively person in the household. You can't help feeling sorry for Catherine, but it doesn't make her interesting either. What about Dickens the father? Because of course, in the novels, one of the extraordinary things he brings alive is what it's like to be a child, what it's like to be a loved child but also a neglected child, and if you read his novels you would think that the relationship between a parent and their child was a kind of holy thing. What impression did you end up with of Dickens as the patriarch of this very large tribe? I think before you look at him as a father you have to think about him and his father, because that's one of the most interesting relationships in his life. He obviously adored his father when he was a little boy. Then the idyll ended, the five idyllic years in Kent, where he was sent to school and was encouraged to sing and recite with his sister, put up on the table in the pub, and his father was really interested in him, took him on the yacht on the Medway and that sort of thing. They got to London, his father was in trouble, and suddenly everything changes: there are creditors, his father can't cope, his father is in hopeless debt and goes off to prison. Dickens is heartbroken for his father, but he also feels a sort of resentment at the neglect he has had. He gets two more years of schooling afterwards, and then his father is again in debt, and he has to go off and become a clerk. Then, as soon as Dickens becomes successful, his father starts wanting money, and Dickens came to hate his father at that point. He writes extraordinary letters about his father and what a devil he is, how he's absolutely terrible, just expecting money from him, and he puts notices in the papers. And that goes on for
quite a long time. And then there's another volte-face: when Dickens very briefly became a newspaper editor, for about three weeks, he appointed his father to be in charge of the reporters on the paper, and suddenly his father is a wonderful old man, adored in Fleet Street, and does his job perfectly. Dickens leaves the paper, his father goes on working there, and there's no more trouble. And by the time his father comes to die, Dickens is there. His father has his terrible operation, weltering in blood, and Dickens stays with him till he dies, and then he puts his arms around his mother and they weep bitterly together, and then Dickens walks the streets for three nights in his distress and plans the funeral. And years later he says to Forster, my father was a better man than I ever realized. And what he does with his father in the books is extraordinary. In David Copperfield his father is Mr Micawber, who is therefore not responsible for all the bad things that happen to David Copperfield; that's Mr Murdstone. So Mr Micawber has his father's charming and ridiculous manner and speech, but is actually a wholly benevolent figure. And then years go by and he writes Little Dorrit, and there is William Dorrit in the Marshalsea prison, and this is not at all such a tender portrait, because William Dorrit is actually not a very nice man; he behaves very badly to his daughter. But Little Dorrit, who is not a realistic figure but a sort of emblematic one, I think you can see her as, in a way, Dickens. The thing about Little Dorrit is that she forgives her father everything; she loves him without reserve; she starves herself so that he shall eat; she is perfect. And I feel maybe Dickens was in a sense recreating the situation, with a child who felt for her father what he wished he had been able to feel for his father. It's complicated, but it's very interesting. I think he never drew characters from life, but he used
them in those interesting ways. And did that, do you think, distort his own... Yes, I'm sorry, I haven't answered your question. Well, the sons. He loved his daughters without reserve, and they were wonderful. His sons, even Charlie, the adored Charlie, of whom he said when he was little, he takes after me, he's going to be like me... Miss Coutts insisted on sending him to Eton, and Dickens didn't really like him being at Eton and took him away from Eton very quickly. Charlie just could not... Charlie had the lassitude, as Dickens said, of his mother, and all the sons except one had that lassitude, and Dickens couldn't understand why they hadn't got his energy, why they didn't want to make their way in the world. He couldn't see that if you're brought up in very comfortable circumstances and not very much is demanded of you, there's no very great motive for fighting your way in the world. And he was really miserable about his sons; he tried to get them jobs, he tried to make them do things, and he said that he felt he never was able to show them love. And then the penultimate son, Henry, brilliantly defied Dickens, who wanted him to go off to India or something, and said he wanted to go to university. Dickens said, well, I'll talk to your headmaster; I can't afford to send people to university unless I know they're going to do well. And the headmaster said yes, he's a clever boy, and Henry did go, and Dickens was absurdly proud of him. He was still at university when Dickens died. Of course he went on to become Sir Henry Dickens and all that. Henry Fielding Dickens? Henry Fielding Dickens, absolutely. I don't know if there's anybody here who has called their progeny Charles Dickens Schmidt or something, but that's a convention we might bring alive again. Yes, he never called any of them after his father, John; they all had literary names: Sydney Smith Haldimand Dickens,
Edward Bulwer Lytton Dickens, yes, Alfred D'Orsay Tennyson Dickens, Walter Landor Dickens, one after another. He wasn't a good father... It's more than time, probably, that I should... I think we've got some traveling microphones in the audience; they're coming. If anybody here would like to ask Claire anything, or say anything in response to what she said, please do. There's a lady at the front here. Do I need to stand up? No, you don't need to; I think we can hear you. Dickens was so adored in his time, but the way he treated his wife: was there no criticism at all, or was there any criticism, of the way he treated his wife? He was quite public in his treatment of her. There was criticism among his circle. He fell out with a great many of his friends at that time. One of the things that struck me when I was writing the biography was how absolute the change in his life was. There were loyal friends like Forster and others, but there were those who did take Catherine's side; they were all discreet, though, they didn't blab. There were a few newspaper stories, but basically they kept quiet. Thackeray, of course; and Mrs Gaskell was very shocked by what he did, and Harriet Martineau wrote an extremely hostile account of his behavior. But somehow he survived. And, you know, still now, when I go to some places, there are people who hate me because I have written about Dickens in a way which they feel... I wrote a book called The Invisible Woman, which was about his relationship with Ternan, and they feel that I'm attacking a figure who should not be attacked. I didn't set out to attack him. And a very curious thing is that everybody knows what Dickens did, how he behaved towards Catherine; that is in the public record. But there
are Dickensians in England for whom Dickens is a sort of idol, a saint, who must not be attacked. There are still people, for instance, and of course one doesn't know for certain about these things, for whom it's very important that Dickens's relationship with Nelly Ternan was unconsummated; there are some people who know a lot about Dickens who hold to that, although Henry and Katie both said that there was a relationship and there was a child who died, and they were Dickens's two most intelligent and longest-lived children. So I'm baffled by that. Yes, actually, Dickens scholars did accept it, people like Philip Collins. I think Peter Ackroyd introduced this theory that it was an unconsummated relationship, which I find baffling. But, sorry, what about the public? If Dickens had lived today, Hello magazine would have had a field day. Absolutely; it would have been fine, because today people don't mind about these things; in fact it has glamour. The stars are expected to behave like this, aren't they? They would have said "with his lovely wife Catherine", and then at a certain stage they would have said "they're very great friends still", and then they would have said "with his wonderful new partner Nelly". But at the time, weren't the public bothered at all by his behavior? Well, evidently not, because when he went round lecturing he got adoring crowds, and rightly, because he was a wonderful performer and he was a great reader. Claire, I'd like to ask you about your work as a biographer. You are now totally absorbed in Dickens, and we can profit from hearing you spout stories as if you've met the man, but you've done this before with other great writers: Jane Austen, Pepys. So your life must be a serial; it's like serial dating, in a way,
and a very intense contact with these people. Could you talk a bit about the intensity, and the way that you need to shed these people as well? Serial monogamy, I was going to say. I once said in a talk that it is rather like being married: one day you feel very keen on your subject and the next day you hate them. My husband was sitting at the back of the audience, and I noticed people turning around to look at his expression. Well, I don't know. Yes, you immerse yourself in the subject, and it's an absolutely extraordinary experience, and you never shed them; that, I think, is the point. I feel I have this family I live with, because you go on finding out things about them, you go on being asked about them, other people write interesting things about them; they have been so intensely a part of your life. I think writing about writers is more difficult. Mary Wollstonecraft was a great thinker, and she was a writer, but her life, that was more historical, and in a sense I'd quite like to go back to writing more historically. Pepys I found wonderful to write about, because the seventeenth century is the most interesting century in English history: we got rid of the King, we got rid of the Lords, we got rid of the bishops, we changed our society completely, and Pepys lived through all that. And then it all came back, of course, and that was interesting too. But I find the nineteenth century quite difficult, because I feel in some sense a blanket, a gray blanket, came over society. Thackeray and other writers complained that there were things they could not write about in England where the French could; one thing they couldn't really write was an account of a young man's development. But I was straying from your question: how do I do it? I don't know. I've been very lucky; I've been able to do
what I wanted. I've been able to explore subjects that I really wanted to explore, and spend time with them and read. For Dickens, you know, those twelve volumes of the letters, rereading all his books, rereading all his journalism: it was a marvelous thing to do. You talk about being paid to read, in a sense. I'm lucky enough to be able to spend five years working at something that I find really interesting. There are moments of despair, of course, but yes, I do enjoy it. Yes, please. This is probably the most obvious question, the one you've been asked most, so I apologize in advance, but I haven't found an answer. For most biographers and scholars the most important or significant piece of information about Dickens's life is his childhood, so why did you decide to leave out his childhood and start his biography with his adult life? Well, I didn't leave out his childhood. I decided that biographies can be a little difficult at the beginning if you've got a list of ancestors and the well-known childhood facts. I wanted to take my reader by the hand and say: this is Dickens at the high point of his life as a man; I want to introduce you to this man in 1840. It seemed a very good place to begin. I thought very hard about this, changed my mind, went back and forth, and I thought that to be able to give a characterization, a lot of information about Dickens as a man, was a good way to start the book. But then if you read on you will find his childhood; his childhood does appear in the book, and I give a lot of space not only to his childhood but to his parents. [Laughter] Well, take courage, it's quite a short opening. When you tackle a new subject (no, now I have a loud voice), when you tackle a new subject, at which stage do you read previous biographies, if you do, apart from the obvious case? Well, before I wrote The Invisible Woman I had already bought my first edition of Forster, and it's practically fallen
to bits now; I'm going to have to have it rebound. Over the years... really, like Antonia, I was so pleased when you said you started reading Dickens when you were six, because I said to an interviewer recently, when he asked when I started, certainly by seven; then I thought afterwards, perhaps I've invented that, but I don't think I have. My first book was not Pickwick, it was David Copperfield, and my mother, like your mother, was a great Dickens lover. So I was interested in Dickens very early on. I read biographies of Dickens; the books about Dickens are endless. I read criticism of Dickens: at Cambridge I read Edmund Wilson's "The Two Scrooges", I read Edgar Johnson, I read Una Pope-Hennessy, I read Gissing's wonderful book on Dickens, the Chesterton, all those books, and I've reread them all. And then I read Peter Ackroyd's book when it came out. The terrible thing is, Michael Slater is a great friend of mine, a really wonderful man and a wonderful scholar, and I said to him, I can't read your book while I'm writing my book, and Michael was so sweet and nice and understood. The fact is, you read whenever you can on a subject you're interested in, and then you do your rereading when you're actually writing your book. It sounds like you don't particularly read recent and contemporary rival biographies, as it were, as a way of gauging your distance from what other people are saying. I think you have to be a bit careful when you're actually writing, at a certain point, to throw yourself into what you have committed yourself to. I don't say I didn't occasionally sneak a look at Ackroyd to see what he said about something, oh goodness, no, or, wonderful, you know, gosh. I know when I was working on Pepys I had Arthur Bryant, and you have moments: some of this is really good, why on earth am I doing
it? You always have to keep your spirits up writing biography. There's always a reason for not writing a biography, quite apart from what people think of biography; there's always a reason for saying, oh well, no, don't do it, and so you have to keep going. Okay: what would Dickens be writing now? Well, one answer to that is that what he was writing then works for today: his whole way of creating London, creating a world, creating character, is, it seems to me, really as relevant today as it was then. But there's another answer you could give. When Dickens was approaching the end of his life, he told a friend that if he had his life again he would have liked to have spent it running a theater; he would have liked to have had complete charge of a theater, organized everything, the castings and all that. And I suddenly thought Nicholas Hytner might move over from the National Theatre; you can absolutely imagine Dickens running the National Theatre with tremendous energy. And coming back to the other biographies: was there a definite element you found missing in the other biographies which made you write a new biography, now containing the central element which you found somehow lacking? Well, one of the things I wanted to do was to give John Forster his due. John Forster doesn't appear very much in the biographies, and it seems to me that he is the most important figure in Dickens's life. He's absolutely there from the mid-1830s to the end, and what Dickens told him is absolutely crucial, and the use he made of it is absolutely wonderful. Forster advised Dickens: he advised him to kill off Little Nell, a huge success all over the world; he advised him to write David Copperfield in the first person. And something I've thought about since I wrote the book: David Copperfield seems to be one of the great novels of the nineteenth century because
for the first time a child's sensibility, a child's voice, is given its due, is taken absolutely seriously, and it's the most extraordinary account of childhood. Now, I say it was the first; it wasn't, because Charlotte Brontë had published Jane Eyre just before. Dickens didn't read the Brontës; he didn't want to read the Brontës. But John Forster did, because he was a publisher's reader and he read everything, and I do wonder, I don't know, this is just my speculation, but I do wonder whether having read Jane Eyre didn't perhaps influence John Forster to make that recommendation to Dickens, that he should write David Copperfield in the first person. And if that was so, it is rather marvelous: those two great novels, which were read all over Europe. Tolstoy was influenced by them both, you know. I like to see them so, perhaps with John Forster joining them. He was almost always right in his advice. He didn't like Bulwer-Lytton, the vulgar Bulwer; Bulwer-Lytton asked Dickens to change the end of Great Expectations, and Forster preserved Dickens's original tragicomic ending. That was one of the things I wanted to do: I wanted to make Forster really be seen, in my biography, for what I think he was. We've got time perhaps for one more. Thank you. Hello, my question is precisely about Forster. You told us before that Forster promised to discuss Dickens's involvement in that project of the home for homeless women, but then he didn't deliver, and I was wondering if you could elaborate on his involvement, not only in that project, but on how he could represent, or to what extent he could represent, the topic of the fallen woman, and how the male middle-class idea could save the prostitute, the idea of the fallen woman, how Dickens represents that Victorian topic. And I have a second question, if I may, which is about... I'm already a bit confused by your first question. Well, I
mean, I said that Forster... I don't know why he didn't go on about it, but I suspect he decided that, because of what some people knew about Dickens having separated from his wife, to write about his involvement with a home for fallen women would perhaps not be desirable in his biography. I think he was just being cautious, but I couldn't help noticing that he'd put out a little promise, because it was written in three volumes, saying he was going to come back to it. As to Dickens's representations of fallen women: that is extremely interesting, but a very large topic. Dickens knew them; he even reports some of the conversation of those girls in the home. There's a very good book by Jenny Hartley called Charles Dickens and the House of Fallen Women, which is not what the home was called. And Dickens had a problem: his fallen women are all melodramatic stereotypes taken from the stage. Dickens, as you know, went to the theater all the time, and they all sort of beat their breasts and say they're going to throw themselves in the river, and say completely unrealistic things: Nancy, and Martha in David Copperfield, and Edith Dombey, well, she's not a fallen woman, but her cousin, Alice, yes, and little Emily. I always wonder, when the children are playing on the beach in David Copperfield and little Emily runs out on a spar, and David Copperfield is made to say there were times I wondered whether it wouldn't have been better if she had fallen into the sea. And then what happened to her? Well, what happened to her was that she was seduced by a gentleman. This was a very favorite book of Dickens's, which he certainly read with Nelly Ternan, and I always wonder what went on in their minds when they read that passage: that it would be better for little Emily to have fallen into the sea and drowned than to have had a love affair. There is some disjunction
going on in Dickens's mind about that particular topic, I think, which betrays itself in his presentation. However, I don't want to criticize him too much, because you can easily open any of Dickens's novels and find a page somewhere which makes you think, oh, he's not at his best here. But that doesn't matter, because each of those great novels, I mean Bleak House, Little Dorrit, those condition-of-England novels, they're flawed, but they are nevertheless great, great because they create this wonderful world, and they are, for the most part, wonderfully written. If you think of the beginning of Bleak House, the fog and that description of London, and Miss Flite. I mean, Shakespeare was well known to Dickens, and Dickens was sometimes a poet in his writing: Miss Flite with her birds, madness, despair; his wonderful poetic imagination; Mr Vholes the lawyer, like a snake who swallows his victims. You know, there's one treasure after another as you read, and it's very rich stuff. Well, that's how we should end, isn't it, with a peroration. Thanks very much, and we have a little coffee break now. I'm sure we all want to show how much we enjoyed listening to Claire. Thank you. Thanks. How many times have I interviewed
A. S. Byatt and Denis Scheck at the British Council's Literature Seminar in Berlin

Good evening. [German greeting welcoming readers of Charles Dickens and readers of A. S. Byatt.] That was most interesting, thank you very much. But being the fourth to talk tonight, one feels, as a German literary critic, a little bit like a version of Uriah Heep out of David Copperfield, entering the stage with a pile of books in my hand and stating, with all humbleness, or with a strong German accent like an SS officer out of a Hollywood movie: we have ways to make you read. And this is exactly what we intend to do tonight: exploring ways of reading Charles Dickens and ways of reading A. S. Byatt. I have the privilege to introduce to you A. S. Byatt. She published her first novel in 1964 and has become a household name, certainly to British but also to German readers, ever since her novel Possession, which arrived in Germany in the afterglow of receiving the most prestigious literary award in Great Britain, the Booker Prize. In 1992, two years later, Dame Antonia published two novellas under the title Angels and Insects, and from one of those, The Conjugial Angel, we will hear a reading tonight. Of course, by now A. S. Byatt has established herself, with her novels, for example the four books about Frederica Potter, and with her short stories and her essays, as one of the, let's say, five authors from Great Britain everyone in the world needs to read today. Now, don't press me to name the other four; as I have in mind, Uriah Heep would probably say that his humbleness forbids him to do that. What I will mention, though, is why A. S. Byatt stands at the forefront of contemporary British fiction today. In my opinion it has to do with her use of the novel as an optical instrument, a telescope directed at our past and reflecting at the same time our present, as demonstrated in her recent novel The Children's Book. Byatt
herself has stated her poetics in an interview with a German journalist. [She quotes it in German.] Charles Dickens is famous for his repeated observation that the world was so much smaller than we thought, that we were all connected without knowing it. [He repeats the observation in German.] This observation is often quoted to stress the qualities of an author with an acute awareness of the changes in the transportation, communication and media systems of his time. I don't really want to make Charles Dickens a prophet of social media, but our gathering here tonight, at Bertelsmann, Unter den Linden, in Berlin, establishes the validity of this concept of our mutual connectedness by Dickens, insofar as there is indeed a connection between Charles Dickens and A. S. Byatt, invisible at first, but in no way flimsy. Their missing link is a literary translator whom they share in common: Melanie Walz. I was told she is present tonight, so Melanie Walz, please stand up, and I beg the audience, there she is, to give her a very warm round of applause, not only for giving us nearly the complete works of A. S. Byatt in German over many years, but also for a brilliant new translation of Great Expectations, from which A. S. Byatt would like to start her reading tonight. Now, Dame Antonia, welcome to Berlin; nice to have you back. I know that you speak German, so I'd better mind my business. I don't speak it very well; I can read it. We promise you that Dame Antonia will end the evening by singing popular German folk songs. But meanwhile, let's start with the question: what would Dickens write today? There is a fairly optimistic statement hidden in this question: would he write today?
He wanted to enter the stage during his lifetime; he wanted to go into acting. Do you really think that Dickens would turn his hand to writing today? I hope so, because what I love about Dickens is what he does with the English language. I don't know whether he was a good actor or whether he was not a good actor, but I do know he is one of the three or four people who managed to make the English language new, and change it, and change its shape, and absolutely delight me. And I am a reader, I am a passionate reader; I couldn't bear him to be making movies or television programmes, and if he were alive today he would have to be writing, and I would be reading him. I'm pretty sure that he would be on television as a presenter, though. Yes, he would; he would have a very difficult life. He would be constantly requested to go and do readings, to go and present things, to go and talk to people, to travel round and round America getting two hours' sleep in airplanes, and he would, now as then, die young from having undertaken far too much and not knowing when to stop. But with a frequent-traveller card in his pocket, I'm sure. Now, not quite twenty pages into your recent novel The Children's Book there is a reference to David Copperfield: a young boy named Philip has been hiding in the South Kensington Museum and finds a temporary new home, which reminds your character Tom of David Copperfield's arrival at Betsey Trotwood's house. Now, is Dickens really that near to you as a writer today? Yes, he's near to me as a writer, he's near to me as a reader, he's near to me as a person, because I was of the generation that was expected to be able to read him very young. I was reading him at the age of six and seven, and my mother gave him to me. What's more, she started with Pickwick Papers, which I read, so that when I came to Little Women, in which the characters had invented a Pickwick Club in America, I knew what they were doing. And so he sort of wormed his way into my sense of the
nature of things, both consciously and unconsciously. I taught Victorian literature at university for quite a time; I didn't teach him very much. I taught George Eliot, who I think should be mentioned; I think she's at least as great as Dickens, and I don't want people to... Thank you. I noticed you flinching when Dickens was named the greatest Victorian writer; I think you have another opinion there. No, he was as great a writer as George Eliot. I think Middlemarch is arguably the greatest English novel. I gather it is not much read in this country, and we are hoping for Melanie Walz to change that. Actually, yeah. Also, she really did know Germany and was deeply influenced by German literature, which is probably enough about George Eliot. Well, we might sneak her in secretly, you know. Now, you have written of course about so many real and fictitious artists of the Victorian age; were you ever tempted to write about Dickens? The older I get, the more I don't want to write about real people. What I'm going to read this evening from is in fact about the life of Tennyson, but I have a sort of moral reluctance to enter the head of anybody who's real. They can stand around in my story, but as long as I've got a head I have the right to imagine, and the more I write, the more this seems to be the case. So if Dickens appeared in a novel by me, he would appear at an edge; he would be a point of reference; he would do something he really did; and I would not say, and then Dickens felt appalled, or, and then Dickens thought, now I shall go on the train. I increasingly can't do that. It's quite interesting, because I'm not sure why. Is it your code of ethics getting stricter? Something to do with my imaginative life, really. And also I get very worried by other people inventing people I know. I greatly admire Colm Tóibín, but I couldn't read his novel about Henry James, because I know Henry James and I don't want anybody between me and Henry James, and precisely because Colm Tóibín is
such a great writer, I didn't want to read it; I became very conflicted. But that kind of thing increasingly bothers me, and I think we live in a world where increasingly everybody feels that everybody can be written about and belongs to them, this kind of sense that we're all a very cosy personal community, we all know each other very well, and we'll know each other even better when we read Facebook, and it makes it quite hard to keep people separate from each other. Does that mean, if I go home tonight and write a novel about A. S. Byatt, you might object? Yes. And why? Because you don't know me that well. We have a very good relationship. Because if you did know me that well, it would be even worse. But exactly; you can't have your cake and eat it. Oh no, I mean, well, that is a very good way of putting it, because anybody can see it wouldn't be a good thing if you did that. I'm thinking about a novel about... you know you don't like my books, so I'm told. No, I don't. Yeah, that leaves me no option at all; I have to stay a literary critic, I guess, if I can't write a novel about you. If I can't write a novel about me, you couldn't be a very great reader. Yes, I think we'll stick to that. Now, of course, Charles Dickens, as we've heard, took a very high interest in the commercial aspect of his texts; he became a publisher himself. We know from his autobiographical texts how deep the hurt from his time was when he was forced to work in the famous shoe-polish factory as a young boy, how strong his fear of impoverishment. We have heard that the fate of the American paperback seems to be a rather grim one at the moment. I wonder, is this a consideration for the writer A. S. Byatt today? Does it mean artistic freedom to receive high advances, the money and so on, or doesn't it bother you at all? Has the money aspect of publishing changed from Dickens's times? I once got into terrible trouble for saying in the press that I didn't think writers should receive enormous advances much greater than what their books would earn. I think
that had something to do with Martin Amis. It did, yes. I mean, he and I now think it was funny, fortunately. But I have been quite lucky; I have in the end sold enough books to earn quite a lot of money, and I like it. And I was absolutely terrified when I heard them saying that the American paperback was going to disappear, because I immediately saw: oh my God, yes it is, and then I shall become very impoverished. And I don't understand the internet very well, and I don't understand what's happened to music publishing, but I get a sort of glimmer of misery at the edges, that we writers are going the way the musicians have gone and that our copyright can't be protected. I was once talking in the British Library, and the librarians were absolutely appalled that I thought that the copyright should be protected. You know, I was requiring a copyright in what I had written for them, and they said, you can't do that, we are a library, we're free to everybody. And I said, I'm a writer, I am living by earning money from writing. I also have been chairman of the Society of Authors, and we looked after that. I don't know, I mean, I am pushing on, I'm very old, and I may die, you know, before this problem reaches critical point. But on the other hand, you know, I'm always kind of taken aback: I would gladly cheer the advent of communism, but does it have to start in the arts? I mean, you have three daughters who would like, really, to get an inheritance. Well, they will, unless the euro collapses, which is another big topic. I think of my daughters and the euro and the pound and the government most nights. I hope you don't. Then I think I'd better get on with the writing. But speaking of inheritances: of course inheritance is the big leitmotif, one of the three big leitmotifs of Charles Dickens. The other is water, of course, the element of water, not only in Our Mutual Friend but in all of his writing, in the writing that we are going to hear; and the
third one is of course the presence of prisons. And all of this comes together in the first chapter of Great Expectations. Why did you choose that? I chose to read that because I distinctly remember reading the first chapter of Great Expectations for the first time when I was a really quite little girl. I know I read it at Pontefract, and I left Pontefract at the age of eight or nine, between the two, and I remember being absolutely terrified, because I was the same age as Pip was when he experienced the scene. On rereading it, you see what Dickens has done; it's absolutely complicated, because although this child is terrified, he is also a very poised narrator. He allows the reader to be amused, but the child I was was not amused; the child I was was terrified. And so I have this wonderful double relationship with the first chapter of Great Expectations: I feel the horror of being turned wrong way up in a graveyard, when I have no parents, by a terrible man who comes up from behind the gravestone, and I now appreciate the wit and the elegance with which Dickens conducts the description of this scene, so that the reader's relationship to Dickens, although it's a first-person scene, is really quite complicated, and every time I read it, it seems to me a greater achievement. So let's enjoy the first chapter of Great Expectations by Charles Dickens, read by A. S. Byatt. Sorry, it has a lot more introduction than I thought. Here we are. My father's family name being Pirrip, and my Christian name Philip, my infant tongue could make of both names nothing longer or more explicit than Pip. So I called myself Pip, and came to be called Pip. I give Pirrip as my father's family name, on the authority of his tombstone and my sister, Mrs
Joe Gargery, who married the blacksmith. As I never saw my father or my mother, and never saw any likeness of either of them, for their days were long before the days of photographs, my first fancies regarding what they were like were unreasonably derived from their tombstones. The shape of the letters on my father's gave me an odd idea that he was a square, stout, dark man, with curly black hair. From the character and turn of the inscription, Also Georgiana Wife of the Above, I drew a childish conclusion that my mother was freckled and sickly. To five little stone lozenges, each about a foot and a half long, which were arranged in a neat row beside their grave, and were sacred to the memory of five little brothers of mine, who gave up trying to get a living exceedingly early in that universal struggle, I am indebted for a belief I religiously entertained that they had all been born on their backs with their hands in their trousers-pockets, and had never taken them out in this state of existence. Ours was the marsh country, down by the river, within, as the river wound, twenty miles of the sea. My first most vivid and broad impression of the identity of things seems to me to have been gained on a memorable raw afternoon towards evening. At such a time I found out for certain that this bleak place overgrown with nettles was the churchyard; and that Philip Pirrip, late of this parish, and also Georgiana, wife of the above, were dead and buried; and that Alexander, Bartholomew, Abraham, Tobias, and Roger, infant children of the aforesaid, were also dead and buried; and that the dark flat wilderness beyond the churchyard, intersected with dykes and mounds and gates, with scattered cattle feeding on it, was the marshes; and that the low leaden line beyond was the river; and that the distant savage lair from which the wind was rushing was the sea; and that the small bundle of shivers growing afraid of it all and beginning to cry was Pip. Hold your noise! cried a terrible voice, as a man started up from among the
graves at the side of the church porch. Keep still, you little devil, or I'll cut your throat! A fearful man, all in coarse grey, with a great iron on his leg. A man with no hat, and with broken shoes, and with an old rag tied round his head. A man who had been soaked in water, and smothered in mud, and lamed by stones, and cut by flints, and stung by nettles, and torn by briars; who limped, and shivered, and glared, and growled; and whose teeth chattered in his head as he seized me by the chin. Oh, don't cut my throat, sir, I pleaded in terror. Pray don't do it, sir. Tell us your name! said the man. Quick! Pip, sir. Once more, said the man, staring at me. Give it mouth! Pip. Pip, sir. Show us where you live, said the man. Point out the place! I pointed to where our village lay, on the flat in-shore among the alder-trees and pollards, a mile or more from the church. The man, after looking at me for a moment, turned me upside down, and emptied my pockets. There was nothing in them but a piece of bread. When the church came to itself, for he was so sudden and strong that he made it go head over heels before me, and I saw the steeple under my feet, when the church came to itself, I say, I was seated on a high tombstone, trembling, while he ate the bread ravenously. You young dog, said the man, licking his lips, what fat cheeks you got. I believe they were fat, though I was at that time undersized for my years, and not strong. Darn me if I couldn't eat em, said the man, with a threatening shake of his head, and if I han't half a mind to it! I earnestly expressed my hope that he wouldn't, and held tighter to the tombstone upon which he had put me; partly to keep myself upon it, partly to keep myself from crying. Now then, lookee here! said the man. Where's your mother? There, sir! said I. He started, made a short run, and stopped and looked over his shoulder. There, sir, I timidly explained. Also Georgiana. That's my mother. Oh! said he, coming back. And is that your father alonger your mother? Yes, sir, said I; him too; late of this parish. Ha! he muttered then,
considering. Who do you live with, supposing you're kindly let to live, which I han't made up my mind about? My sister, sir, Mrs Joe Gargery, wife of Joe Gargery, the blacksmith, sir. Blacksmith, eh? said he, and looked down at his leg. After darkly looking at his leg and at me several times, he came closer to my tombstone, took me by both arms, and tilted me back as far as he could hold me; so that his eyes looked most powerfully down into mine, and mine looked most helplessly up into his. Now lookee here, he said, the question being whether you're to be let to live. You know what a file is? Yes, sir. And you know what wittles is? Yes, sir. After each question he tilted me over a little more, so as to give me a greater sense of helplessness and danger. You get me a file. He tilted me again. And you get me wittles. He tilted me again. You bring them both to me. He tilted me again. Or I'll have your heart and liver out. He tilted me again. I was dreadfully frightened, and so giddy that I clung to him with both hands, and said, If you would kindly please to let me keep upright, sir, perhaps I shouldn't be sick, and perhaps I could attend more. He gave me a most tremendous dip and roll, so that the church jumped over its own weathercock. Then he held me by the arms, in an upright position on the top of the stone, and went on in these fearful terms: You bring me, tomorrow morning early, that file and them wittles. You bring the lot to me, at that old Battery over yonder. You do it, and you never dare to say a word or dare to make a sign concerning your having seen such a person as me, or any person sumever, and you shall be let to live. You fail, or you go from my words in any particular, no matter how small it is, and your heart and your liver shall be tore out, roasted, and ate. Now, I ain't alone, as you may think I am. There's a young man hid with me, in comparison with which young man I am a Angel. That young man hears the words I speak. That young man has a secret way peculiar to himself, of getting at a boy, and at his heart, and at
his liver. It is in vain for a boy to attempt to hide himself from that young man. A boy may lock his door, may be warm in bed, may tuck himself up, may draw the clothes over his head, may think himself comfortable and safe, but that young man will softly creep and creep his way to him and tear him open. I am a keeping that young man from harming of you at the present moment, with great difficulty. I find it very hard to hold that young man off of your inside. Now, what do you say? I said that I would get him the file, and I would get him what broken bits of food I could, and I would come to him at the Battery, early in the morning. Say Lord strike you dead if you don't! said the man. I said so, and he took me down. Now, he pursued, you remember what you undertook, and you remember that young man, and you get home! Good night, sir, I faltered. Much of that! said he, glancing about him over the cold wet flat. I wish I was a frog. Or a eel! At the same time he hugged his shuddering body in both his arms, clasping himself, as if to hold himself together, and limped towards the low church wall. As I saw him go, picking his way among the nettles, and among the brambles that bound the green mounds, he looked in my young eyes as if he were eluding the hands of the dead people, stretching up cautiously out of their graves, to get a twist upon his ankle and pull him in. When he came to the low church wall, he got over it like a man whose legs were numbed and stiff, and then turned round to look for me. When I saw him turning, I set my face towards home, and made the best use of my legs. But presently I looked over my shoulder, and saw him going on again towards the river, still hugging himself in both arms, and picking his way with his sore feet among the great stones dropped into the marshes here and there, for stepping-places when the rains were heavy or the tide was in. The marshes were just a long black horizontal line then, as I stopped to look after him; and the river was just another horizontal line, not nearly so
broad nor yet so black; and the sky was just a row of long angry red lines and dense black lines intermixed. On the edge of the river I could faintly make out the only two black things in all the prospect that seemed to be standing upright; one of these was the beacon by which the sailors steered, like an unhooped cask upon a pole, an ugly thing when you were near it; the other, a gibbet, with some chains hanging to it which had once held a pirate. The man was limping on towards this latter, as if he were the pirate come to life, and come down, and going back to hook himself up again. It gave me a terrible turn when I thought so; and as I saw the cattle lifting their heads to gaze after him, I wondered whether they thought so too. I looked all round for the horrible young man, and could see no signs of him. But now I was frightened again, and ran home without stopping. Thank you, Dame Antonia. Would you please return tomorrow and the day after tomorrow, and so on, and read the rest of the book to us? I should enjoy that. Very impressive. I was struck, listening to you reading this first chapter of Great Expectations, by how cinematic the description really is; you could turn that into a graphic novel very easily. You could, you could. I love the bit about the horizontal lines. I remember that was one of the bits that made me want to be a writer. I remember thinking: you can do that, you can see it in that sort of abstract way, you know, you don't just say the sky was red, and you do the horizontals and the verticals, and it was some kind of analytic, technical thing which is of course extraordinarily beautiful too. It made me very happy. And I wanted him to end the chapter on a better note, instead of which you've got the gibbet, which is why it remains so frightening. This brings me to my favourite scene in your latest novel, which is about a writer, among many other characters, Olive, and Olive is saying, in the German translation of Melanie Walz,
that she reacts to every work of art with the wish to create her own work of art. And you have just demonstrated that when you read Dickens. Is that how art works for A. S. Byatt as well? Yes, all sorts of things. If I see a Sigmar Polke, for instance, something you wouldn't necessarily think: the moment I see a Sigmar Polke object I want to go home and write something. It wouldn't be the same as a Sigmar Polke, but it makes me think that this is what is worth doing. I know that you have tried your hand at painting as well. I can't do it. My Pip thought was: not being able to keep upright on the stone. No, what I have done is, at various times in my life, I have bought boxes of paints and paper to paint on. They're all still sitting piled up in my study, because I'm so afraid of failure. I can write about painting, but I can't do it. Well, perhaps I can, but I don't try, because it will go wrong. Well, I remember I had a teacher at school as well, and as I kept writing stories, she kept saying, why don't you write poems, and I kept saying, no, no, I'm doing what I have to do. In the end she gave up. If I'm correct, you started writing fiction at the age of thirteen? Yes, probably; I was writing before that, really. All little girls write stories, I suppose, but I wrote quite a long novel at the age of thirteen at boarding school, and I don't think it was any good. And I wrote a novel with another girl about horses; we wrote a chapter alternately; I didn't think that was any good. And then I wrote a quite complicated novel, in absolute despair, about a girl in a girls' school, who is entirely closed up with women, who finds that one of the girls is really a boy, and this was simply an expression of my desire that some male person should appear at some point in my life. And when I left school I went down to the basement in the house I slept in, where I spent a lot of my time hiding behind the stove, and I ceremoniously pushed all these novels in
through the front door of the furnace, because they really weren't any good. And then I went home and started another one. I found one the other day that I didn't know I'd written at all; I opened a drawer and there was a whole novel. I'm sure your publisher must be pleased with the news. I think so. I don't think it ever got ended; I think I must have written it at Cambridge. When you recall with what kind of ambitions you started writing, what did you want to achieve with your writing? I was a very timid child. I needed to write; I don't know that I even really imagined I might get published. I went to Cambridge as a student, and I was at Cambridge at the same time as Sylvia Plath and Ted Hughes; there they were, you know, publishing very accomplished verse. I never showed anything I wrote to anyone. I kept on and on writing this novel; in fact I wrote two novels before I sent one to anyone to look at. And there was a terrible scene once. I was at a lecture on D. H. Lawrence, and I was writing my novel sort of on the desk in front of me, and the lecturer actually got down from his desk and came and stood and looked at me, and he said, it's not as interesting as all that, you don't need to take as many notes as that. And I thought, if only I could say, I'm not taking notes, I'm writing a novel. But it was a bit like that, my life. Lawrence is famous for claiming that the novel is the highest form of expression of humankind, isn't he? Now, when I did my finals at Cambridge, we had to write, for our essay, I think maybe it was two essays, anyway, we had a list of things we could choose to write on, and one of them was exactly that quotation. And it seemed to me that it was incredibly arrogant of Lawrence to say that, and extremely wrong, and what about mathematics and Plato, and, well, there wasn't neuroscience then, but there were other forms of science, and historians and people. And so I got really angry, and I knew I was killing myself as an examinee, because I knew the
question had been set by Dr F. R. Leavis, who believed Lawrence about the novel being the highest form of human expression yet attained, and also believed that Lawrence was the highest form of novelist. And so I was damning myself with every word I wrote. Nevertheless, you can't go about saying things like that; life is much more various than novels. I'm glad to hear that. I was struck by the elements of fairy tales in the first chapter of Great Expectations that we have heard here, thinking about the liver, the heart, that is torn out of the body and roasted and eaten; that could come straight out of a Brothers Grimm fairy tale. And fairy tales play a very important role in your latest book, because we have a fairy-tale writer there, Olive. I wonder, what kind of impression did fairy tales make on A. S. Byatt the writer? When I was a child I didn't like books about children; I liked the fairy stories, but I didn't remember that for myself. Angela Carter said in the 1970s that she had a sort of moment of revelation when she realized that it wasn't realist novels she'd liked as a girl, it was fairy tales and imaginary worlds, and this actually did something, and it meant
something very important to me; it caused me to think, I can do that, and I should do that, and I should write fairy tales, I should write sort of grown-up fairy tales. And then I got into writing Possession and writing Victorian poetry, and I think, in a completely different way from Angela Carter, I felt an impatience with modern realism at that time. I now find I can go on writing a realist novel as long as I intrude into it other forms of expression, just to show it's not the highest form of expression yet attained, necessarily. Which you do, of course, not only by entering fairy tales into The Children's Book but also by making the novel a kind of essayistic endeavour, by writing a cultural history at the same time. Well, I think, you know, human beings think, and once you start thinking about thinking, as well as sex and love and friendship, you have to put thinking into the novel, to make the whole world of the novel more like the real world. E. M. Forster wrote a list of things he said the novel had in it: birth, death, sleep, love, food. He didn't say it had thinking, and he didn't say it had work. Now, I live in a world in which both thinking and work are as important to me as birth, death, etc., etc., well, probably not as death, but you don't notice death once it's happened to you. And so I found I didn't like my characters if they didn't think. I mean, not all of them think, but they didn't seem to me like real human beings if they didn't think, and I put the essays in to connect the thinking of the people to the world they're in. But they're not, as it were... they are part of the text of the novel; the whole thing is written from beginning to end, and the same metaphors run through any history I might put in, or indeed any quotations I might put in, as through what I'm writing myself. It all, it seems to me, is one aesthetic. Oh, you give us in The Children's Book a tremendous wealth of information, well-researched bits. For example, I got a spark in
my mind is the image of Kaiser Wilhelm sitting, here in Berlin, on a chair which is constructed for him out of a riding saddle, dashing away totally mad letters to his English cousin, and so on. How did you research for that? I read very fast, so I read several lives of Kaiser Wilhelm, and, helped by Melanie Walz, I read a great deal about the history of Munich, and some of the novel is set in Munich, and it wouldn't have existed without Melanie, really. One of the themes of the novel is puppetry; there are people who make puppets, and I found, what was it called, there was a nightclub in München in which a puppet play was put on, called Eine feine Familie, and it was about the crowned heads of Europe as little children fighting. It was at the Elf Scharfrichter, which is absolutely wonderful; I nearly fell over when I found it. And it only went on for about six weeks, didn't it? I had imagined this Elf Scharfrichter as being a sort of cultural centre in Munich for years and years, but it was put down by the authorities. But all these people, very German, marched in in their executioners' hoods with their axes over their heads and enacted things. And it connected one theme to the other, because they came to the conclusion, indeed, that the quarrels between the royal families were probably the causes of the First World War as much as anything else. So it is a children's book with a sad irony. You give us an enormous wealth of information: there's the women's movement, there's the Fabian Society, and so on. In writing historical novels there's always the danger of a certain inevitableness, of one event leading to the other as we know it from the history books, but you kind of make these historical events fluid again, and I came away from The Children's Book with the impression that there really was not a necessity for the First World War. No, I keep
reading essays on the causes of the First World War, and I'm not a historian, I really am not; this is the only piece of serious history I've had anything to do with. I would call you an anti-historian. I am an anti-historian. And the more you looked for the necessity, the more there wasn't any, and there are all sorts of things one didn't know, like the Kaiser supporting Queen Victoria in her bed when she was dying. I mean, I found that out; I don't normally read lots of royal biographies. And when the coffin on the gun carriage started coming off, it was the Kaiser who got off his white horse and pushed it all back on and told them how to reharness the horses. It's little things like that that somehow change your image of the big things. I mean, he was obviously completely mad, and when my husband told me this story of von Moltke having sent out, was it 1,100 trains, for the soldiers to fight the war, and the Kaiser comes in waving a telegram saying, it's all right, it's all right, we don't have to have this war, Moltke goes into his study and decides that his employer is mad. But that kind of thing I never read when I was reading history books when I was a girl; it was, you know, the causes of the war were this and this and this and the build-up of the Navy. And then you find little things, like here is Baden-Powell, who founded the Scouts movement and healthy camping for everybody, and it turned out that his great passion in life was witnessing executions, and he would travel all over the place, all over Europe, from sea to sea, in order to be there at an execution. Now what does that tell you? That is, by the way, something he shared with Charles Dickens, who spent a great amount of money securing first-row places and waiting from midnight on to witness an execution in London. It makes you think. But let us enter the children's book, and let us enter the world of one of your leading characters, who is a potter. It seems you have something for the name of Potter, or the profession
of pottery. Obviously, after Frederica, you now have a professional potter. Well, Frederica Potter was called Frederica Potter because I was reading Karl Popper, and I wondered if you could get away with calling a character in an English novel Popper, and I decided you couldn't, so I moved sideways. But of course everything has its unconscious motivation, and I am descended from the Potters of Stoke-on-Trent, of the Five Towns, so Philip's ancestry is my own ancestry: they were the workers in the potteries. Philip has run away from the Potteries, where he lived in a large working-class family on the edge of destitution, carrying saggars, because he wants to make a proper work of art; he wants to make a proper pot. And he is discovered by the family of the writer and other children in the Victoria and Albert Museum, drawing things, and because he's fallen in with the right people, who are Fabian and Arts and Crafts people, he gets taken to stay with a potter, in fact down on the same marshes as Pip was on. And when they did a film on the television in England recently of Great Expectations, it opened in the church in which this scene in my novel takes place, so we've got two Pips in the same church, more or less. I think there is a dissertation or PhD thesis in there. He's actually called after Philip Hensher; I just thought, who do I like who has a good name? Anyway, I think it's self-explanatory: he's going for a solitary walk on the marshes. Philip had not been included in the party, and had not expected to be. He had taken some bread and cheese and set out in the strangely unseasonal weather on a long ramble. He walked to his favourite church, the diminutive brick-built church of St Thomas Becket, near Fairfield. Philip thought of this church as his own particular church. He knew little about Thomas Becket, and did not know that the church was built on Becket lands. He had never seen a church so isolated. It stood
amongst water meadows stretching flat and far, on which for miles the fat sheep busily cropped the salty grass. There was no road leading to it, and from it no village, no high road could be seen, only the marshes and the weather. The marshes often flooded in the winter, and then the church appeared to float mysteriously on sheets of floodwater, reflected in the dark bright surface on calm days, blustered and beaten by howling winds and spray on stormy ones. Philip made his way from tuft to tuft of the marsh grass, for it was sodden underfoot, and water welled up between tussocks. When he got to the church he looked around at the endless sky, the flat horizon, the apparently endless sheep-studded meadows, and felt peaceful. He didn't think exactly in language; he noticed things: the dabbing movement of a duck, the awkwardly beautiful, almost crippled look of the trailing legs of a flapping heron, fish squirming in mud, patterns made by the wind. He sat for a long time on a stone in the churchyard, not even thinking. Time was so slow there was no reason ever to stand up or to move on. A figure appeared on the Fairfield path, at the limit of vision, a woman in silhouette, in a skirt, with her hair bound in a scarf and what looked like a small suitcase in her hand. She stopped to lean on a gate, and then walked a little way, and then sank to the ground, like a kind of hummock, and stayed down. Philip stood up and set off across the marsh, feeling that this other person, who now shared the emptiness with him, was both an intruder and perhaps in need of help. It took him some time to reach her. During his striding, leaping, occasionally bogged approach she did not stir. She appeared to have fainted, or died. She had crumpled quite compact, her body in a ball, her face on her outstretched hand, the cardboard suitcase on the wet dust within reach. Philip knelt down. He did not want her to be dead. He took her shoulder and turned her face slightly towards him. The face was grimy, the lips slightly cracked, the eyes closed.
Her nostrils and lips trembled; she was breathing. A breeze tugged at the edges of her gypsy scarf, which was more animated than she was. She was wearing a felted coat, bunched over a grey skirt. Her ankles were swollen, and her shoes cracked and dusty. She had walked a long way. Philip squatted beside her amongst the wayside grass and took her hand, which seemed the politest thing to do. He bent over and said in her ear, gently, 'Can I help?' and then, 'How do you feel?' She trembled a little, and stirred, and opened her eyes briefly, staring out past Philip's occluded head at the sunlight. What she said, however, was his name. 'Philip Warren.' Philip stiffened. 'I'm looking for Philip Warren,' she said. 'I keep getting lost.' Philip pushed back the scarf and the hair from her face, rearranged her features in his mind's eye, and saw she was his sister Elsie. Elsie, a year older than Philip, was the sister he loved, had found it hardest to leave. He said, 'Elsie. It's me. I am Philip.' 'I can't see your face because of the sun. I got lost. I walked and walked and walked and there were no people or places. What are you doing out here?' Philip felt briefly very annoyed. 'What are you doing is the question. Can you sit up?' He pulled at her, no longer with respect but with the intimacy of family. She sat up and smoothed her skirts, stretching her horrible feet in front of her. She had always been, as far as was possible, fastidious about her person and clothing. 'Mum died,' said Elsie. 'I came to tell you. No one wrote. You don't put any addresses on your postcards, do you; probably you don't want to be bothered, but I thought you ought to know. Mum died. Aunt Jessie took the others, except Nellie, who's gone into service. I didn't think I could last, I didn't think I could see the year out, in a house with Aunt Jessie.' 'What did she die of?' 'Lead poisoning. That's what was always coming, and it came. She asked for you a lot. She wanted me to give you her brushes and the Minton cup, and I've got them in that suitcase. I said I'd find you. She knew I couldn't
abide to be with Auntie Jessie, and I have found you. They're not worried; they'd expected it.' She spoke with a kind of determined vehemence, her voice thick with dust and dirt. She said, 'You ought not to have...' She began suddenly to weep, hot little tears bursting out through her eyelids, spattering on her grey cheeks. Philip was partly trying, and partly refusing, to think about his mother. He half saw her, thin and stooping, and crossly shut the picture out. Elsie heard the next question. 'The postcards said Romney Marsh, and Winchelsea. I walked to Winchelsea, and someone said if I was looking for potters there was a madman out at somewhere called Purchase, so I set off walking there, and got lost, as you see.' 'You'd better come back with me. Can you walk? You were walking, and you fell over, I saw you. Can you get on your feet?' 'I shall have to.' I finished reading this novel last week, naked in a German sauna, because I couldn't part with my copy even for fifteen minutes; I wanted to get to the end. It's a brilliant novel. Thank you very much. We want to showcase another piece of your writing tonight at the end, which brings us back to Dickens in a certain way, because the similarities between Dickens and Kafka have been very often mentioned, and it's hard to resist, for example, comparing the Circumlocution Office in Little Dorrit to Kafka's Schloss; and Kafka himself spoke of his early writing attempts as simple Dickens imitation. But I am tempted to mention another Kafka formulation: he very famously spoke of writing as being surrounded by voices and ghosts, and your writing is of course surrounded by voices and especially ghosts. What fascinates you about ghosts? I don't believe in them, but I am sort of haunted by voices. In fact it's very painful: lines from things run into your head and you can't remember what they are, and you walk through the street and you can only get half the line and you feel dreadful, and you don't know what person is
attached to this wandering voice. And now I have a German publisher with an iPad, and I say to him, can you find this, and he patiently puts it into the iPad and then I have the answer, and I think I may annoy him in the end. But we did look something up for this reading, and he is not to be annoyed; I'm absolutely positive about this. Now, Dickens himself at this time was very fond of another writer, about whom you wrote this novella, The Conjugial Angel: Alfred Lord Tennyson. What fascinated Dickens about Tennyson? I'm not quite sure. I think Tennyson just sang in everybody's head. I mean, Tennyson was as popular as Dickens was; Tennyson's funeral was a sort of huge public occasion such as no writer's funeral would now be, and he spoke to everybody, really, Tennyson did, in those days. I don't know particularly what it was. Claire Tomalin, who is here, does know why Dickens admired Tennyson so enormously; it's from reading her book that I have discovered this information. At the beginning of our evening you told us that you would today hesitate to write so much about a real historical character. Yes. This is in fact a true story. Tennyson's friend Arthur Henry Hallam died quite suddenly, in his early twenties, in Budapest, I mean in one half of it, I can't remember which. And he had become engaged to Tennyson's younger sister Emily, and the Tennyson family received this terrible news and was shattered, and Emily went to bed for a year and mourned. And Tennyson after some time began to write what I think is one of the greatest poems in the English language, In Memoriam, which he kept in a sort of strange closed stanza; it took him seventeen years to finish it, and he kept writing it and writing it; it was clearly the nature of his life, and the mourning for Arthur Henry Hallam became Tennyson's mourning, although it was Emily who was in a sense the widow. And
after about ten years Emily quite suddenly decided to marry somebody called Captain Jesse, and she had been living on the pension from Hallam's father, and the family didn't like Captain Jesse, and Elizabeth Barrett Browning said that it was absolutely shocking that she should ever think of marrying, you know, after having lost Arthur Henry Hallam when she was only nineteen years old. Poor woman. Anyway, they had a sort of marriage, which went on, and in later life they were living in Margate, and they had joined some spiritualists and Swedenborgians, and they were holding a series of séances, and it was said that they were trying to raise the spirit of Arthur Henry Hallam. And in the Swedenborgian religion everybody has one eternal partner, and it doesn't have to be the person you're married to. And there is only one such partner in a lifetime? Just so, yes. And when you find your eternal partner you are joined together in one body and you become one being, which is known as a conjugial angel, which is why this story is called 'The Conjugial Angel'. Anyway, I was working my way through the footnotes to the letters of Arthur Henry Hallam, and I came upon the account of this séance, and the Tennyson séances did finally raise a ghost, which I will read about. I invented both the mediums, who are called Mrs Papagay and Sophy Sheekhy, because I needed someone with whom I could be free, whose mind I went into. What Emily says when the ghost arrives is out of the footnotes to the letters of Arthur Henry Hallam, which are an exact record of what she said, and I was very moved by it. So let's operate the famous A. S. Byatt telescope and bring us to the séance. They start doing automatic writing. Mrs Papagay proposed that they try automatic writing. She drew the paper towards herself; she did not wish to impose on Sophy. She waited. After a moment the pen wrote confidently: Blessed are they that mourn, for they shall be comforted. 'Is anyone there?' Mr
Hawke inquired. 'Any message for any particular person present?' 'He will not come,' she said. 'Who will not come?' said Mr Hawke. 'Oh,' said Mrs Jesse, with a little sigh. 'It means Arthur, I am sure.' The pen wrote rapidly: and he that shuts out love, in turn shall be shut out from love, and on her threshold lie howling in outer darkness. The pen appeared to like this word, for it played with it, repeating it several times: howling, howling, howling. And then adding: and those that lawless and incertain thought imagine howling, 'tis too horrible. 'A poetic spirit,' said Mr Hawke. 'The first is Alfred,' said Mrs Jesse. 'The pen may have hooked them, so to speak, out of my mind. The last is from Measure for Measure, a passage about the fate of the soul after death which Alfred was much struck by, as we all were. I have no idea who is uttering these things.' All shall be comforted, all tears shall be wiped away. The bridegroom cometh. Ye know neither the day nor the hour when he cometh. Light the lamp. 'Who is telling us these things?' asked Mrs Jesse. 'No. Oh no,' said Sophy Sheekhy in a strangled voice. 'Sophy!' cried Mrs
Papagay. Sophy felt cold hands at her neck, cold fingers on her warm lips. The flesh crept over the bones of her skull, along the backs of her fingers, under the whalebone. She began to shake and jerk. She fell back, open-mouthed, in her chair, and saw something, someone, standing in the bay of the window. It was larger than life and more exiguous, a kind of pillar of smoke or fire or cloud, in a not exactly human form. It was not the dead young man for whom she had felt such pity. It was a living creature with three wings, all hanging loosely on one side of it. On that side it was dull gold, and had the face of a bird of prey, dignified, golden-eyed, feather-breasted, powdered with hot metallic particles. On its other side, turned into the shadow, it was grey like wet clay, and formless, putting out stumps that were not arms, moving what was not a mouth. In a thin whisper it spoke, in two voices, one musical, one a papery squeak. Tell her I wait. 'Tell whom?' said Sophy in a small voice. They all heard: Emilia. I triumph in conclusive bliss. Tell her we shall be joined and made one angel. It was hungry for the life of the living creatures in the room. 'Sophy,' said Mrs
Papagay, 'what do you see?' 'Gold wings,' said Sophy. 'It says: I wait. It says to tell you: I triumph in conclusive bliss. It says to tell Emily, Mrs Jesse, Emily, that they shall be joined and made one angel in the hereafter.' At that, Emily Jesse gave a great sigh. She let go Sophy's cold hand and detached Sophy's other hand from her husband's, breaking the circle. Sophy lay inert, like a prisoner before an Inquisitor, staring at the half-angel whom no one else saw, or really felt the presence of. And Emily Jesse put her hand into her husband's. 'Well, Richard,' she said, 'we may not always have got on together as well as we should, and our marriage may not have been a success, but I consider that an extremely unfair arrangement, and shall have nothing to do with it. We have been through bad times in this world, and I consider it only decent to share our good times, presuming we have them, in the next.' Richard picked up her hand and looked at it. 'Why, Emily,' he said. And then again, 'Why, Emily.' 'You are not usually at a loss for words,' said his wife. 'No, I am not. It is only that I understood, I understood you to be waiting for, for some such communication. I never supposed you would say anything like what you have just said.' 'It may be that you have other ideas,' said Mrs Jesse. 'You know that is not so. I have tried to be understanding. I have tried to be patient. I have respected, too well, too well...' 'You tried too well. We both kept...' Captain Jesse shook his head like a surfacing swimmer. 'But all through these séances I understood you to be waiting...' 'I do love him,' said Emily. 'It is hard to love the dead. It is hard to love the dead enough.' Mrs Papagay was made intensely happy by this exchange. Who would have thought it, she said to herself, and yet how right it was. Only when the angel threatened her with the loss of the husband she had taken for granted did she really see him, see him in terms of his loss, his vanishing, that was implied, and was driven to imagine existence without him. I cut a little bit here, and Mrs
Papagay then does some more writing. I think I will say, before I read this, that the séances were then broken up, because the automatic writing began to produce very obscene poetry, and they had to throw it in the hearth and desist. She drew the paper absently towards herself and took up the pencil, which squirmed gleefully in her hand and set off, possessed, across the paper, in a fanatically neat, unhesitating hand. Hamed is the conjugial angel stone / That here he stands with heavy head / The backward-looking pillared dead / Inert, moss-covered, all alone. The Holy Ghost trawls in the void / With fleshly Sophy on his hook / The sons of God crowd round to look / At plumpy limbs to be enjoyed. The greater Man casts out the line / With dangling Sophy as the lure / Who howls around the heavens' colure / To clasp the human form divine. Rose petals fall from fallen hair / That in the clay is redolent / Of liquid oozing, and the scent / Of the dark pit, the beastly lair. And is my love become the Beast / That was, and is not, and yet is / Who stretches scarlet holes to kiss / And clasps with claws the fleshly feast. Sweet Rosamund, adulterous Rose / May lie inside her urn and stink / While Alfred's tears turn into ink / And drop into her kelco shows. The Angel spreads his golden wings / And raises high his golden cock / And man and wife together lock / Into one corpse that moans and sings. 'Stop,' said Mr Hawke. 'There is an evil spirit present. These are filthy imaginings which must be put an end to. Turn up the lights. Stop, Mrs Papagay. We must be strong.' Aroused by his angry voice, Aaron sidled across the table, knocked over the rose bowl, and took wing to the mantelshelf, leaving behind him a dark stain covered with white rounds. I should perhaps have said that Emily Tennyson kept both a raven and a pug as pets; there was a real raven. 'What can it mean?' said Mrs Hearnshaw, reading. 'What can it mean?' 'It is obscene,' said Mr
Hawke. 'It is not fit for the eyes of ladies. I believe it is the communication of an evil spirit, to which we should give no more hearing.' Aaron let out a loud, perhaps affirmative, croak at this, which made them all jump, and Pug, shifting in his sleep, let out a series of popping little farts and a rich decaying smell. Emily Jesse, pinched and white, took up the offending paper and carried it over to the fire, into which she dropped it. It curled and crisped, browned and blackened, and flew on ashen wings up the chimney. Mrs Papagay, watching Mrs Jesse, knew that this was their last séance in this house, that something truly remarkable had happened and that, precisely for that reason, no more attempts would be made. She was sorry, and she was not sorry. Mrs Jesse made tea for Mrs Papagay and Sophy, and said she had decided it would be wiser to have no more meetings for the present. 'Something is playing games with much that is sacred to me, and it is not myself, Mrs Papagay, but it can be no one else, and I find I do not wish to know any more. Do you think I lack courage?' 'I think you are wise, Mrs Jesse. I think you are very wise. You console me.' She poured tea. The oil lamps cast a warm light on the tea tray. The teapot was china, with little roses painted all over it, crimson and blush-pink and celestial blue, and the cups were garlanded with the same flowers. There were sugared biscuits, each with a flower made out of piped icing, creamy, violet, snow-white. Sophy Sheekhy watched the stream of topaz-coloured liquid fall from the spout, steaming and aromatic. This too was a miracle, that gold-skinned persons in China and bronze-skinned persons in India should gather leaves which should come across the seas safely, in white-winged ships, encased in lead, encased in wood, surviving storms and whirlwinds, sailing on under a hot sun and cold moon, and come here, and be poured from bone china, made from fine clay, moulded by clever fingers delicately turning the potter's wheel in the pottery towns, baked in kilns, glazed with slippery shiny
clay, baked again, painted with rosebuds by artists' hands holding fine, fine brushes with tips of sable hairs, floating buds on an azure ground or a dead-white ground. And that sugar should be fetched from where black men and women slaved and died terribly to make these delicate flowers that melted on the tongue, like the scrolls in the mouth of the prophet Isaiah; that flour should be milled, and milk shaken into butter, and both worked together into these momentary delights, baked in Mrs Jesse's oven and piled elegantly onto a plate, to be offered to Captain Jesse with his wool-white head and smiling eyes, to Mrs Papagay, flushed and agitated, to her own sick self, and the raven and the dribbling Pug, in front of the hot coals of the fire, in the benign lamplight. Any of them might so easily not have been there to drink the tea, or eat the sweetmeats. Storms and ice floes might have taken Captain Jesse; grief or childbearing might have destroyed his wife; Mrs Papagay might have lapsed into penury, and she herself have died as an overworked servant. But here they were, and their eyes were bright, and their tongues tasted goodness. Thank you very much. Such is the power of great literature that this last passage almost filled me with a wish for a cup of tea, but then I remembered Seamus Heaney, I think it was, who said that two things do not exist in this world: a whiskey too large and a reading too short. So maybe we put an end to it here. Thank you so much, A. S. Byatt. Thanks also to the good people at the British Council and their representation here and in London for making this possible, and thank you, our audience, for being so patient and attentive tonight. Thank you. Goodnight. I don't read that very often, that poem that imitates the rhythm of In Memoriam. |
english_literature_lectures | Charles_Dickens_and_Popular_Culture_Professor_Michael_Slater_talks_to_Jonathan_Harrison.txt | To accompany Senate House Library's current exhibition on Charles Dickens and popular culture, Professor Michael Slater discusses some of the items within the display. This passage is literally autobiographical: it's the description of David when he's particularly unhappy, after the terrible Mr Murdstone, the stepfather, has come into the household, treating him cruelly and so forth, and his great consolation is reading these books that belonged to his dead father. It so happens that John Dickens, Dickens's father, did have a collection of books which didn't go to the pawnshop, which Dickens was able to read, and he transfers this experience directly to David Copperfield in this passage from chapter 4: 'My father had left a small collection of books in a little room upstairs, to which I had access, and which nobody else in our house ever troubled. From that blessed little room, Roderick Random, Peregrine Pickle, Humphry Clinker, Tom Jones, the Vicar of Wakefield, Don Quixote, Gil Blas, and Robinson Crusoe came out, a glorious host, to keep me company. They kept alive my fancy, and my hope of something beyond that place and time; they, and the Arabian Nights, and the Tales of the Genii, and did me no harm; for whatever harm was in some of them was not there for me; I knew nothing of it. It is astonishing to me now, how I found time, in the midst of my porings and blunderings over heavier themes, to read those books as I did.' And he goes on to talk about how he put himself
into all the heroic parts, and so on. But it's an interesting range of books, isn't it? He makes several references to the Tales of the Genii in his fiction, but the most dramatic is the reference at a crucial point in Great Expectations, just before the reappearance of Magwitch, when Pip learns the true source of his wealth, which is not coming to him from Miss Havisham, as he had fondly supposed, but from this horrible creature from his childhood, this terrible convict Magwitch, who is about to appear to him. Just before that, at the end of the preceding chapter, Dickens uses the Misnar story to great effect: 'In the Eastern story, the heavy slab that was to fall on the bed of state in the flush of conquest was slowly wrought out of the quarry, the tunnel for the rope to hold it in its place was slowly carried through the leagues of rock, the slab was slowly raised and fitted in the roof, the rope was rove to it, and slowly taken through the miles of hollow to the great iron ring. All being made ready with much labour, and the hour come, the sultan was aroused in the dead of night, and the sharpened axe that was to sever the rope from the great iron ring was put into his hand, and he struck with it, and the rope parted and rushed away, and the ceiling fell. So, in my case; all the work, near and afar, that tended to the end, had been accomplished; and in an instant the blow was struck, and the roof of my stronghold dropped upon me.' So that's the most dramatic use, as I say, of the Tales of the Genii, but they're always cropping up. Dickens recalls his childhood reading also in A Christmas Carol, where he specifically is talking about the Arabian Nights and Robinson Crusoe. I think his absolute favourite novel was The Vicar of Wakefield, but Robinson Crusoe came very close to it. And in a very significant passage in A Christmas Carol, Scrooge is taken back to his past by the Ghost of Christmas Past, and
he sees this poor forlorn boy in this desolate schoolroom, who hasn't been taken home for the holidays and so on, and it's a kind of image of Dickens, the desolation of his childhood just before the blacking-factory period, and of how he was consoled by reading and his imagination kept alive, as he said, so important to him, by reading the Arabian Nights etc. And this is depicted in Scrooge as a child: he sees himself sitting there, forlorn, in the schoolroom, and then suddenly this wonderful glamorous creature looks in through the window, which is Ali Baba. 'Why, it's Ali Baba!' Scrooge exclaimed in ecstasy. 'It's dear old honest Ali Baba! Yes, yes, I know! One Christmas time, when yonder solitary child was left here all alone, he did come, for the first time, just like that. Poor boy! And Valentine,' said Scrooge, 'and his wild brother, Orson; there they go! And what's-his-name, who was put down in his drawers, asleep, at the gates of Damascus; don't you see him? And the Sultan's groom turned upside down by the Genii; there he is upon his head! Serve him right. I'm glad of it. What business had he to be married to the Princess?' To hear Scrooge expending all the earnestness of his nature on such subjects, in a most extraordinary voice between laughing and crying, and to see his heightened and excited face, would have been a surprise indeed to his business friends in the City. 'There's the Parrot!' cried Scrooge. 'Green body and yellow tail, with a thing like a lettuce growing out of the top of his head; there he is! Poor Robin Crusoe, he called him, when he came home again after sailing round the island. Poor Robin Crusoe, where have you been, Robin Crusoe? The man thought he was dreaming, but he wasn't. It was the Parrot, you know. And there goes Friday, running for his life to the little creek!' Then, with a rapidity of transition very foreign to his usual character, he said, in pity for his former self, 'Poor boy!' and cried again. And another very important book for Dickens from his childhood
reading was Le Sage's Le Diable boiteux, as it's called in French. This was an early eighteenth-century novel by the French novelist Le Sage, translated as The Limping Devil, and it's a story about a young man, Don Cleofas, who encounters this demon, he actually releases him from a bottle, I think, and who, as a reward, takes him over all the rooftops of the houses in Madrid and shows him all the wicked goings-on, what people are really like. It's a satirical novel, basically: you have this wonderful episode of the demon pulling the roofs off the houses to show people up to all sorts of nefarious activity inside. And Dickens makes a very striking reference to this in Dombey and Son, when he's talking about the complete moral blindness of Mr Dombey to his situation, his pride completely sealing him off from understanding what's going on around him. Dickens writes that Dombey is like many people who don't really see what's happening around them, and then he says, referring to the Le Sage novel: 'Oh for a good spirit who would take the house-tops off, with a more potent and benignant hand than the lame demon in the tale, and show a Christian people what dark shapes issue from amidst their homes, to swell the retinue of the Destroying Angel as he moves forth among them.' In addition, Dickens also mentions seeing some Shakespearean productions at the local theatre in Rochester. He wrote a wonderful essay later on, for his journal All the Year Round, in the series called The Uncommercial Traveller, an essay called 'Dullborough Town', which is actually about Rochester and his memories of the place as a child, and he remembers particularly being taken to the theatre, the small theatre in Rochester: 'It was within those walls that I had learnt, as from a page of English history, how that wicked King slept in war-time on a sofa much too short for him, and how fearfully his conscience troubled his boots.' And he goes on to say:
'Many wondrous secrets of Nature had I come to the knowledge of in that sanctuary', in that little theatre, 'of which not the least terrific were, that the witches in Macbeth bore an awful resemblance to the Thanes and other proper inhabitants of Scotland; and that the good King Duncan couldn't rest in his grave, but was constantly coming out of it and calling himself somebody else.' It's about the exigencies of these small struggling companies, where people had to double up the parts; that's the trouble: it's not at all wonderful that Duncan's killed and then he comes back as somebody else. And Dickens always had that child's-eye view, as it were, of seeing both what was really there and what we all pretend is there; that's the essence, really, of his art, I think. He just loved anything and everything to do with the theatre, and towards the end of his life, indeed very near the end of his life, he said to a friend of his that his ideal existence would be to be the manager of a great theatre with a wonderful company, and he should have absolute control of it: he should decide what was put on and how it was staged and everything. That would be his absolute ideal existence: not as a novelist, but as the manager of a great theatre |
english_literature_lectures | Jonathans_Literature_Lecture.txt | Hi, my name is Jonathan Friedberg, and today I would like to speak with you about the English modernist period of literature. This period ran roughly from 1901 to 1950, the first half of the 20th century. Now ask yourself: how do you get a real emotional understanding of another time or another place in the world, I mean a real emotional connection to the events of that time and place? This is why this particular literary period is important for you to know about. It's important to understand what happened in the past, and especially what is presently going on in the world around you: not just the facts and figures you get from a news report or the newspaper, but the real-life emotions as well. The writers of this time period brought brilliant insight into the world around them, and they were able to convey the emotions and the fears and the joys of a very believable fictional reality. To explain this to you, I'm going to touch on three main points. First, we're going to talk about how the writers of the English modernist period used fictional narratives to give their readers a look into the world of an alternate political reality; they used this technique to warn their readers about the risks of falling prey to political forces that were spreading throughout the world at the time. Next, my second point, we'll discuss how the events of this time period influenced the thinking and the writing of this era. Finally, my third point, we're going to take a look at a literary hero created in this period: Winston Smith, of George Orwell's 1949 novel 1984.
First, to summarize the literary period: the authors of the modernist period used their writing to express warnings and concerns at a frightening new world order. They used fiction and poetry to tell realistic tales of a not-too-distant future in which the very human individuality that they prized had been usurped by technology, war, and authoritarian political dictatorships. Their writing brought their readers into this new world; through their fiction, it allowed them to feel the fears and the sufferings that this dystopian future held. The readers could connect to the characters and experience these fears and feelings right along with them. It was a brilliant new technique and a very personal way of writing. So next let's move on to the second point, which is the historical period itself; this explains why the writers, the authors, did what they did. The modern period, the first half of the 20th century, was a time of great political, social, and economic change. The writers of this time period reflected the change in their rejection of the styles of the past: the glory of war, the beauty of science-fiction technology, like, uh, not Orson Welles, H. G. Wells, to the moon and back, Jules Verne, that sort of writing. They adopted new forms of writing in poetry and prose, more realistic styles of writing. Their work was influenced greatly by the events of the day. There was rapid industrialization and the loss of rural life as people moved from their farms to cities to work in these huge impersonal factories. The advent of technology was now intruding into their lives, with electric lights and telephones and television, and most significantly, they were dealing with the effects of those two disastrous world wars and the political upheaval and new realities that followed. To truly understand the writing of this period, you have to consider the changes that these authors witnessed, really, in their very own lives. They were born at a time when most people lived on farms; transportation was by plodding horse
or maybe a train; communication would take weeks by mail, snail mail as we call it today; lighting was gas lamp or candle. But then by 1945 the world had changed to such a point where many people lived in cities, they worked in these big impersonal factories, and they could travel by car or plane, getting across an ocean in hours or around the world in days. Communication was instantaneous, where most homes had telephones and radios, and television was becoming more commonplace than bread and butter, practically. Mankind was even poised on the edge of space. And yet at the same time, through all these great advances, the Soviet Union in particular was a great world power which denied not only the existence of God but also that of the very individual. Technology was making life more hectic and seemingly more dangerous, as it allowed the Soviet Union and communists to facilitate the rewriting of history through mass media as never before. The literature of this time really reflects the fears and the frantic pace that these authors faced in their own lives. Lastly, let me move on to my third point, which is about Winston Smith, the somewhat inconsistent hero of George Orwell's novel 1984.
Smith's an ordinary man; he's working to serve this large impersonal state apparatus while at the same time struggling to maintain a sense of his own individuality. This novel paints a wonderful and kind of disheartening picture of his struggles, and it indirectly debates the benefits of being an individual versus the benefits of sacrificing oneself for the common good of a large impersonal society. Smith works in what's called the Ministry of Truth, where he is tasked with rewriting history, in the form of altering documents and photographs so that they reflect his party's current version of events: they've always been at war with East Asia, but then all of a sudden they've always been allies with East Asia and fighting against Eurasia, and back and forth, and no one really knows what the truth is anymore. Smith also commits the very individual crime of falling in love. For this he's arrested and he's tortured in the ironically named Ministry of Love, where he eventually rejects his love interest and recommits himself to the party and the state; he even goes so far as to admit that two plus two now equals five. This really illustrates the absurdity of the power a totalitarian government can have in controlling not only the actions of its people but their very thoughts. So, in conclusion, the objective of this presentation was to demonstrate how, during the English modern era of literature, authors often used fiction as a means to convey a true understanding of another real-life situation. In the case of 1984, I showed you how George Orwell painted a grim portrait of the power of the Soviet state and the risks of letting down our guard against the perceived communist expansion of the times. This novel, through the eyes of Winston Smith, its hero, illustrates the fear and the uncertainty around this new world order in the post-World War II era. And to recap, I've covered three main points: first, how authors of this period used fictional
narratives as a means to convey and express their ideas in ways that mere essays and news reports could not; secondly, we talked about the events of the time period during the first half of the 20th century and how they influenced the thinking and the writing and the style of this era; and finally, we discussed how George Orwell used his novel's protagonist, Winston Smith, as the vehicle with which to convey his message of warning against the threats a totalitarian state posed for the individuality of its citizens. I hope you found this presentation interesting, and thank you |
english_literature_lectures | York_Debtors_Prison.txt | 26th of August 1842. Joseph Beanland, a debtor, last night attempted to escape, disguising himself with a moustache and a light-coloured coat. He pretended it was only a jest, but if he repeats such a jest I shall apply to the magistrates to have him confined to the felon side of the jail. I have ordered him into solitary confinement, 3 days. York Castle Museum today attracts hundreds of thousands of visitors every year, right through from local people to tourists visiting from all over the world. The building is a former prison, which housed not only felons but also people who found themselves in debt. Between the early 1730s and the 1880s the top two floors were known as the debtors' prison, and people sent here stayed until their debt was paid, however long that took. So what was it like to be a debtor here, and, as a society, why do we have debt? We are seven York St John BA honours history students, and we've spent months trying to answer just that; here's a brief look at some of what we found. Well, we're sat at the entrance, in a way the first room of the first floor, which really makes up the very entrance to the debtors' prison of what is a three-storey U-shaped building, the ground floor being principally for felons, those who committed a crime. Actually, debtors were put in prison because they couldn't pay their creditors, at the behest of their creditors more than the law itself, and this is the room that we're now sitting in where the jailer would have actually welcomed people, in a prison way, and then taken them off to their cells to spend their time. But of course it was a costly business being in prison, so you paid to come through the front door, you paid for absolutely everything, and there was a scale of charges. The two floors that made up the debtors' prison are a really interesting set of spaces: they're very uniform, they're very box-like, but they're very plush, such that you wouldn't recognize them as perhaps
cells as we think of them, and they're arranged on two floors of a U-shape, so U-shaped corridors with access to exercise yards, so they could get out and feel some fresh air. But debtors weren't as constricted as felons, so actually they could use a lot more of the space; they could even wander into town, some of them, at certain times during the history. We call the whole building the debtors' prison, but really the two floors which now look like offices and museum display spaces represent the debtors' prison. A debtor was someone who, up until 1869 in Britain, could be imprisoned for owing someone money. There was a specific definition for being a debtor and one for being a bankrupt: you could only claim bankruptcy if you were a trader who owed more than £100; if you were not a trader and you owed any sum of money, or if you were a trader and you owed less than £100, then you could be imprisoned for debt. Well, debt was absolutely no respecter of persons at all; virtually anyone could end up in prison for debt. You might be a gentleman farmer who'd had a bad harvest and couldn't pay your suppliers; you could be a publican whose beer went off, and suddenly you couldn't meet your bills. So it could be very smart people, you had knights of the realm who were imprisoned for debt, and it could be very poor labouring people who just couldn't make ends meet. 10th of December 1843. George and William Brommet still refuse to remove into one of the rooms appropriated to the third class, to which class they belong; they also refused to go to chapel this day, Sunday. I have therefore given them another day solitary. There are three main reasons why people have debts, and they haven't changed over the centuries, and the three reasons are impatience, bad luck, and optimism. So, impatience: what that means is that people don't want to wait until they've saved up enough money to buy something; they want to buy it now, so they borrow in the expectation of being able to pay back in the future. Of
course, that's where the problems occur, when you don't generate enough income in the future to pay back, but normally that's a good thing, and there's been a big change in the way Britain has worked in the last few years, with greater opportunities to borrow to do that. So that's the good side. The bad luck is when you suffer a loss of income: you become unemployed and you want to maintain your levels of consumption, so you borrow to do this, and this not only applies to people, it applies to governments, so we see in a recession, for example, that governments borrow heavily in order to be able to maintain their levels of expenditure. And the third reason, optimism: that's because we have some idea for the future, we want to invest, so we borrow money and we hope to be able to repay it in the future, and this can have good outcomes and it can have bad outcomes. So what was the relationship between life inside and outside the prison for debtors, and how did debtors find being in such close proximity to felons? In some ways life inside the prison was very much like life outside the prison; it replicates the structures of British society quite a lot. It's very much a strong class hierarchy. So of the two floors of York Castle jail that were used for the debtors' prison, the first floor, which had the higher windows and the bigger rooms, that was for smart, respectable debtors; that was called the master's side, and they lived on the same floor as the governor, and the chapel was on the first floor as well. The poor debtors, who were classified as really, really poor and who didn't have titles, who weren't esquire or who weren't sir, they lived on the top floor, which was smaller rooms, smaller windows, and it was very much more crowded up there. And the master's side of the debtors, they sort of made the rules for how life worked inside the debtors' prison, so in some ways it's not so dissimilar to life outside the prison walls. Well, the views of the debtors would have been lovely, I think, and
until 1835, until they built the new prison, when you had this, literally this Wormwood Scrubs of gritstone dropped in, you saw a beautiful view, much as we see today, really, from the buildings, to be honest with you. We see the Foss at the back, the river, Clifford's Tower; actually it's a lovely view, and I don't know whether that's good or bad, whether that inspires you to think, I want to get out of this place, or whether you think, gosh, I wish I was out there, and I want to escape. Also the windows that were provided in the cells, the penetration of light is really nice; they're quite grand, they're quite large, they're quite light and airy spaces. So you were locked up, you were denied your liberty to a degree, but you had nice views at the same time, so I think there's a real interesting psychological dilemma there in your head. The debtors were in close personal contact, or could be, with the felons. The felons used to exercise in the front courtyard of the building; there was a fence across the two wings and they had this small courtyard. The debtors, on the other hand, had the entire castle grounds in which to walk, so they could speak with the felons through the railings of the fence across the felons' courtyard. So there was a lot of communication going on backwards and forwards, and there was obviously some passing of contraband: we have accounts of, for example, a loaf of bread coming in which, when it was stabbed, was revealed to have a pig's bladder full of gin inside. So there were quite a lot of temptations for debtors, I think, to pass things to felons and to receive payment in return; some of the felons would have had more financial resources than the debtors did. But at the same time it was also quite a shocking environment: you have some of the debtors who've led a fairly sheltered, respectable, middle-class life, and they suddenly find themselves in close proximity with people who are wanted for horrendous crimes. And this proximity between debtors and
felons was one of the issues that agitated prison reformers in the late 18th century and early 19th century, saying we need to keep these classes of people more separate. 7th of October 1842. William Whitaker, a debtor: 24 hours solitary for threatening to strike John Hutchinson, another debtor. The said John Hutchinson informs me that John Townend, a debtor, has been selling ale for some days and had a quantity of bottles hid in a certain place. I sent George Thompson, second turnkey, to search for the bottles; he found 17. This ale was not fetched in the regular way but purchased of other debtors as having the county allowance of bread, which bread I have directed to be discontinued. If you live in York today and have local ancestry, you may have had a relative who was a debtor here. Carl Shelto has been tracing his family history and believes that one of his ancestors spent some time in York debtors' prison. Well, predominantly most of my family come from the Tadcaster area, West Yorkshire, and also the Wetherby area. I believe that the William Shelto who was in York debtors' prison is related to my family. He was a solicitor; I believe he came from Leeds, but the area he worked in was around the Wetherby area, Tadcaster, Leeds, York, and that coincides with the entries in the documents that he prepared for, I think it was, a three-year period. Well, it's quite amazing really to see, you know, a distant ancestor's, well, something written in his own hand, and the clients he dealt with, and it'd be most interesting to find out a lot more about him. Well, having a debtor in the family, and somebody who was imprisoned for that: mixed feelings on that. Nowadays debt is much more accepted, I suppose, than it was back in those days. If I am to feel anything, given his profession as a solicitor and the reputation of solicitors, I don't know which is worse, a debtor or a solicitor, to be honest with you. 17th of January 1844. Reverend J. G. Houndsfield, a debtor receiving the county allowance, has received a
piece of ham, sent per railway, which I allow him to have, as were I to detain it at any length it would spoil; but I have intimated to him that no provision is to be received by those debtors who receive the full county allowance, and if any more comes I shall not allow him to have it without he consents to give up the provision provided him by the county. He is a deep, designing man and will use every stratagem to evade the rules of this jail. At York Castle Museum today some of the original debtors' cells are now galleries showcasing exhibits to visitors, but behind the scenes many are being used as museum archives and offices for staff who work here. So what's it like to work in a building that used to be a prison? It's quite interesting working on this site, because you do, in a way, you do feel like you're working in a building with a lot of history and past. When I arrived I discovered next to my desk some graffiti; there is graffiti all over the building from the debtors. This is very beautiful graffiti, beautifully script-written, and it's William BL from York. It says: 28 days, came in August 11th 1883, goes out September the 7th 1883, and I kind of want to write next to it: came in November 2011, going out April 2014, because that's kind of my plan, but I don't have to be here, really; no one's got me locked in a room and making me be here. When I was told I was moving into this particular office, I was the first one to kind of bags this desk, because I love the view out of the window at Clifford's Tower, and I just think it's a great view to have out of this particular building. But the other thing I quite like about working here as well is, on your way out of the building, I don't know whether you've noticed, there is a stone with graffiti on, prisoners' graffiti, and it refers to this place: this prison is a house of care, a grave for man alive. And I quite like that idea, a grave for man alive, and sometimes I think, I mean, a grave for man alive: it is a
horrible building; it is dark, smelly, narrow, claustrophobic and uncomfortable, sometimes too hot, sometimes too cold, and really difficult to work in. The strange thing is, though, half the people in York love it, because it's their castle museum and it's been in their lives: they've come as children and they've come in as parents and they're coming as grandparents, and it's wonderful. But actually, you go back far enough, it was horrible, where people were executed and died and lived miserable lives and had a horrible time; but actually, to work in it, you also get a little bit of a frisson from that. But it is a very, very difficult space; it was never designed for modern living at all. When you're working in here, you do definitely feel as though you're in quite an enclosed space, and sometimes that can feel quite claustrophobic. The way that sound alters and changes around the building is quite eerie at times: you hear patches, you don't hear patches; you're very much centred in your own little space. It's quite a strange building to be in. I always think, I've worked in quite a lot of historical houses and period properties, and I always think that buildings do take on the feeling or the atmosphere of their original purpose; there's something in the mortar or the bricks or the stone that absorbs the stories from the past, and I find it quite an intimidating building. I find it most interesting that this office I work in has been a cell for debtors, it has been the family home for military warders, it has been a place where people dished out allotments, and it's been my office, or it's been an office of a curator, for a long period of time. So four walls, a floor and a ceiling, and a couple of windows have seen 300 years of people living in that space for very, very different reasons, and some have left graffiti on the walls. I'm very tempted to... I'm sure I'd get in trouble... which, you know, from the 1800s still stands the test of time. So for me, when I'm bored, or when
work's too much and you're thinking, ah, paperwork, you think: actually, you know what, I'm in a space that's had a life of 300 years, and people have been incarcerated in it, people have grown up in it, people have pushed paper in it; you know, actually that to me is a really exciting thing. 15th of October 1841. Mary Morson, committed here for one calendar month for attempting to bring spirituous liquor into the debtors' prison, was delivered of a female child this night, 15th of October 1841. So debt is nothing new, and thankfully we don't get locked up for it anymore, but we still have a culture of debt in our society, and many of us today might have ended up in a debtors' prison. So will we always have debt? We always have had debt. Debt isn't necessarily a bad thing; economists would say it's an intertemporal reallocation of resources, and that simply means that we want to consume today rather than consume in the future, and when people take out a mortgage to buy a house that's exactly what they're doing: they want the benefits of owning a house today, and they borrow the money to buy it. We always will have debt; I can't imagine any circumstances in which we don't have debt. Indeed, an economist would say it's optimal to have debt. 6th of April 1843. Mary Moss, debtor, was discharged this day, having been an inmate of this prison for 24 years and 7 days. |
english_literature_lectures | Charles_Dickens_Part_1_of_3.txt | In front of Buckingham Palace, the memorial dedicated to Queen Victoria reminds us of the England of the past, where during her reign Britain was the ruling industrial, military and commercial force of the globe. Just after the Napoleonic Wars, the country profited from an era of peace that permitted it to devote itself entirely to the Industrial Revolution and to becoming the most important industrialized nation in the world. This Industrial Revolution, which favoured science and business, allowed for the emergence of a rich middle class that, thanks to the Great Reform Bill passed in 1832, shared political power that up until then had been an aristocratic prerogative. It is during this period of economic, scientific and political flowering that the 18-year-old Victoria became queen of the greatest industrial, naval and financial power on Earth. Victoria ascended to the throne the same year that a 24-year-old parliamentary journalist published a serial novel that became an instant success. The Pickwick Papers was an astounding literary debut for Charles Dickens, the most popular writer of his time and considered by some to be one of the greatest British writers, on a level with Shakespeare. If it hadn't been for Pickwick Papers, which made him famous almost overnight, I would never have been born, and therefore Pickwick Papers to me is incredibly important. He was 24, he was beginning to write short stories, he was a political journalist. Pickwick Papers was the very first book that he ever wrote, and it was written every month: another three chapters were written, that is. It was the first time any new book had ever been written like that, so he was the inventor, because of Pickwick, of the paperback; he was the inventor of the soap opera, because he left everybody at the third chapter waiting to find out what happened, so they'd have to buy the next part. But they weren't selling very well; only 400 copies every month were sold, and that's not
very much it doesn't pay for all it's worth so the Publishers should have said cut but they didn't how he kept the Publishers publishing his books and all he said was okay fine we will cut the pictures to two I will double the number of words and you you'll pay me more now he didn't alter the number of copies of pck being sold uh pck couldn't do it it was when Sam Weller came in Sam Weller was the servant to Mr pwi a cockney and Cockney speak funny English and he came in and from 400 copies each month they sold up to [Music] 40,000 so it was that England's greatest Queen began her Reign at the same time that Britain's greatest novelist began his career Dickens writing constitutes an exceptional witness to his age more his incisive and moving criticism of the inequalities of Victorian society would give birth to a new phenomenon a social conscience for even as the Industrial Revolution mechanized the entire country leading to Prosperity unprecedented in history the cost of this progress was a radical difference between rich and poor members of the city and businessmen working to build Financial Empires formed a social Elite in the West End they began to imitate the aristocracy educating their sons in the right schools and expecting their wives and daughters to spend their Leisure Time at High Society Gatherings for example at a ball the guests were subject to a very strict code of conduct the respecting of which confirm their status as a lady or a gentleman men were expected to wear black trousers vest and jacket with a white shirt and [Music] tie as for women they were to wear white dresses and the most beautiful Jewels while avoiding low necklines they were to be accompanied by a chaperon who undertook to see that her Protege did not commit a social Gaff that would Sully her reputation physical contact between ladies and gentlemen was judged unworthy good manners demanded that both sexes reserve the expression of their emotions for flowers dogs and [Music] horses 
Created by Queen Anne nearly 300 years ago, the royal races at Ascot are still important today. It is there that you can see the measure of importance the British people and gentry still attach to the royal family and its traditions. For the four days that the racing meet lasts, the crowd is welcome to observe the traditional royal procession along the six kilometres that separate Windsor Castle from Ascot. For the lower classes, confined to a reserved area of the clubhouse, this procession and these races offer the opportunity to watch celebrities and royalty at close range; they watch them showing off in the Royal Enclosure. One can clearly see the division that still separates the gentry and the lower classes: aristocratic phlegm and reserve contrast with the cheer and rowdiness of the boisterous crowd. The Ascot meet remains a living tableau of Victorian manners. If Dickens abundantly described the way of life of this elite middle class, which he belonged to, he was one of the few who was sensitive to the miserable fate reserved for the working masses in the capital. Attracted by industrial activity, people from the English countryside, and many Irish fleeing the rampant starvation in their homeland, moved to London. However, the city simply lacked the necessary resources to absorb this human tide, and so it was that Charles Dickens lived practically his whole life in a city where the sewers and the open-air cesspools spread their miasma in the polluted air. The stench of the dead rose from city cemeteries; 50% of children under five died from malnutrition or disease. Life expectancy was 40 for the middle classes; it was only 22 for the working class. And when my friend the American poet Henry Wadsworth Longfellow came to visit us, I decided to take him to see the slums. I remember that Forster, my future biographer, and Maclise, the painter, who had just completed a portrait of me that I like, accompanied us on our expedition. I took them through narrow alleys, dead ends and back streets.
I wanted to see a tenement. The neighbourhood was overcrowded, with people living in poorly constructed buildings with no ventilation and no bathrooms; in certain places excrement was spread about in the rooms, hallways, basements and courtyards with such density and thickness that it became almost impossible to move. The moral stench of this zone of theft and vice was such that my poor friend Maclise fell ill. And when I think that when Oliver Twist was published I was accused of invention and melodrama! From Oliver Twist, his second novel, onwards, Dickens's social preoccupations are as clear as a watermark in his writings. His commitment towards greater social equality led many to associate him with a more radical political element, the Chartists, who demanded the adoption of a charter of rights for the people; the charter would guarantee the right to vote for men and a secret ballot rather than a show of hands. In any case, Dickens's sensitivity to society's pariahs sprang not from his political convictions, which were rather conservative, but rather from a personal memory of a childhood drama which haunted him all his life. His father, John Dickens, was imprisoned for debt in the Marshalsea, and his mother chose to go live with her husband in prison, obliging Charles's younger siblings to accompany her. Charles, at age 10, had to quit school, which he loved, and work for a shoe-blacking manufacturer. Thirty years later, forever marked by the spectacle of misery and wrecked human lives he encountered at the Marshalsea, Dickens wrote Little Dorrit, describing the infamous conditions of Victorian jails. We're now standing in the grounds of what was formerly the Marshalsea prison, which Charles Dickens wrote about in Little Dorrit. It was also known to the inhabitants here as the Marshalsea clink, and the reason it was called the clink is because most of the prisoners were in chains; the debtors, as they walked along, their chains would clink, hence the name clink. There were two types of debtor who
would come to this prison. One would be people who had a certain amount of money, and they would go over to what was called the Master's Side. As prisons weren't run as prisons as such, whoever ran the prison ran it as a business, in terms of how much they could exploit the prisoners for. If you had money you came into reasonable accommodation; you could even have a servant. If you were very poor, the ground we are standing on would probably go down three or four levels, and the very poorest people were kept at the lowest level. And if you owed a sum of money, say £5 or £6, which in the times of Charles Dickens was a substantial amount of money, you had to pay the keeper of the prison to come into the prison; you also had to pay a fee to leave. So in actual fact, not only did you have to find the £5, you had to find the extras, which would probably be double, and consequently people stayed in places like this for two or three or even four or five years. But when Dickens wrote about it, he wrote exactly as it was: these were awful, dreadful places where they kept people in the most dreadful conditions in the name of civilized society. With his father out of prison, Charles was able to study again. When he was 18 he taught himself stenography, and four years later he became a reporter for the Morning Chronicle. This was a liberal paper where he developed the reputation for being the best parliamentary chronicler in London. It was here he published his serial novel The Pickwick Papers. The success of The Pickwick Papers assured the young journalist's financial future at the age of 24, and it is in this neo-Gothic church, St Luke's in London, that Charles Dickens wed Catherine Hogarth on April 2nd, 1836. She was the daughter of the publisher of the Evening Chronicle. The young couple left for a honeymoon in the southeast region of England, an area the young novelist associated with idyllic childhood memories; with London, it became a source of inspiration for him, and its influence can be found everywhere in his writing. It is
here in this old house in Chalk that the young couple began their married life, and it's here on this hill that Dickens developed a lifelong love of walking. After long hikes in the marshes of Gravesend he would stop when he entered the village and raise his hat to salute the strange gargoyle that still decorates the church entrance. It was in this little house in Chatham that Dickens lived the happiest years of his childhood. The market town held a special place in his works; one can find numerous descriptions of it in A Tale of Two Cities and The Uncommercial Traveller. Travelling to visit his aunt Betsey Trotwood, it's in Chatham that an exhausted and broke David Copperfield was obliged to sell his clothes. For his whole life Dickens hiked the wild hills of Kent; the countryside, the cemeteries, the hotels and pubs that he found so picturesque can be found in his writing. Visiting the area, one can still follow the peregrinations of the novelist and his characters. It's in Cobham that one finds the old church and cemetery where Mr Pickwick and one of his young acolytes circle the graves, engrossed in a lively conversation. Just in front, the Leather Bottle was one of Dickens's favourite pubs; several parts of The Pickwick Papers take place there. Today the walls of the establishment are covered in caricatures, photos and various souvenirs evoking the memory of the famous author. In Cooling, these two rows of tombstones that tell of the death of 13 children in the same family, all dead before the age of two, became in the novelist's mind the sisters and brothers of his character Pip. |
english_literature_lectures | Oliver_Twist_Parable_Providence_and_the_Poor.txt | um, I studied at the University of Exeter for many years; I was awarded a PhD um a few years ago. It was Dickens's anniversary on the 7th of February this year, his bicentenary, and I offered, uh, for my local library in Ron, to do a talk for them. um Dickens was, uh, a speaker at the first free library to be opened in this country; that was in Manchester in 1851, and I'm a great believer, as he was, in providing education free of charge; um, that's why the tickets are free this evening. I did a presentation for National Libraries Day at the Central Library, um, so this is the third talk; this is not a repeat talk. Poor old Tim here, he's had to sit through two already, and I did warn him, so it's going to be a change. The things I'm hoping to achieve are these: I'm hoping that you will go away this evening wanting perhaps to read Oliver Twist again, and possibly to read other Dickens novels, with a little bit more knowledge and insight into Charles Dickens as a person and also into Oliver Twist. We chose Oliver Twist for a couple of reasons. One, it's quite short, and having studied Dickens for seven years I appreciate short, as opposed to Bleak House, which is about 800 pages. The second is it's very popular; it's engraved within popular culture, and I'm sure we're very familiar with that. The only other thing I'll say before we get started: um, I'm going to give you a perspective of Oliver Twist; um, it's not the only perspective of Oliver Twist. I suspect that a lot of the things I'm going to say this evening may or may not be familiar to you, so at the very outset I don't want you to feel that this is the only way to interpret the book. So off we go. I've entitled it Oliver Twist: Parable, Providence and the Poor. You know what it's like: when you start with alliterations with P, I had to keep going, uh, but it suits really well. Parables play an important part in a lot of Dickens's novels: Hard
Times, for example, and Great Expectations. We're going to be looking at how Dickens uses the parable of the Good Samaritan as a key to understanding the novel; we're going to talk about Providence, how certain chance events happen to shape the novel; and finally, a big issue for Dickens's writing throughout his work was the poor. But if I can just start with this speech that, uh, Dickens gave at Manchester Free Library on, um, the 7th of September 1852, excuse me for putting my glasses on: "Ladies and gentlemen, I have long been, in my sphere, a zealous advocate for the diffusion of knowledge among all classes and conditions of men, because I do believe, with all the strength and might with which I am capable of believing anything, that the more a man knows, the more humbly, and with a more faithful spirit, he comes back to the fountain of all knowledge, and takes to his heart the great sacred precept, on earth peace, goodwill toward men." So Tim, can we put on the first slide there? This is where Dickens was born; this year a statue was put in front of it, at Mile End Terrace, now I think called Commercial Road, in Portsea, Portsmouth, um, I don't know. He was born on the 7th of February, a Friday, the same day that David Copperfield was born; you may know that David Copperfield was a very autobiographical piece of work. His father, John Dickens, was a naval pay clerk; that's why when the Dickenses moved it was always connected with the Navy; and his mother was, um, Elizabeth Dickens. This is the house he was born in. I'm just going to give you a bit of information about his childhood, what shaped him. Four months after he was born, his father earned £176 a year. His father had a bit of a problem: he was a very generous man, very convivial, but just as with Mr Micawber he hadn't worked out a basic fact of life, income must equal expenditure, and with John Dickens, um, I'm afraid expenditure did not always equal income. Four months after this, they moved several miles across the town to Kingston in Portsmouth, to Hawke Street. Dickens
was to move, Charles Dickens was to move, 15 times before the age of 16; on most occasions it was due to, uh, financial constraints being put upon him. Some of the things that drove him, memories: he had a happy childhood, I suppose; the happiest days of his life were spent in Chatham, in Ordnance Terrace. um, He had members of the family staying with him; he was regularly read to by his mother and grandmother; he had a great love for books. He read all the 18th-century classics: Tobias Smollett, Henry Fielding, Oliver Goldsmith; he was a voracious reader. He also often went to the theatre, and was taken there by a cousin, James Lamert. He had a very active imagination; the first thing that he wrote, in 1821, was called Misnar, the Sultan of India. And as this young man grew up, a couple of unusual things were happening. He was a relatively sickly child, and often he would have to stay in his bedroom, and once he went to his bedroom he would read and read and read. He also had a fantastic memory to retain information; it's reckoned that he could vividly remember things from when he was three years old, and he was storing up all this information. The things that drove him, though, and he was a driven man in many ways: the first thing was when his education was taken away from him. He had a bit of a problem, because he had an older sister called Fanny Dickens who was absolutely fantastic on the piano, in fact won a place at the Royal Academy of Music. Now when the Dickens family, when John Dickens was arrested for debt and sent first to the King's Bench prison, his sister Fanny was able to carry on with her education at the Royal Academy. Charles, however, at the age of 12 was put out to work. That caused a lot of resentment, being taken out from education to work. He worked in a blacking factory as a 12-year-old boy, basically sticking, um, labels on polish bottles. He found that very hard. A few years later, in a similar situation, he's back in school, the Wellington House Academy, doing really well; again,
the father's arrested, for failure to pay rates, and again he's taken out of school; at the age of 15 he has to start as a clerk in a solicitor's, nothing wrong at all with working as a clerk in a solicitor's, I hasten to add, but again his education was taken away. Dickens, unlike other authors at the time, who tended to be from very rich backgrounds, he knows what poverty is. He remembers when he's ten years old how his father takes his books away from him and sells them to a pawnbroker; he remembers something he calls "the deed", which was drawn up between the tradesmen and his father; he actively knew what poverty was. The two things that had the effect etched on his brain as a child were: one, he was never ever going to fall into poverty himself, and two, he wanted to help the poor. Could someone get me a drink? Sorry, very very dry throat. So this is where it all started, a relatively humble house, but this is where our greatest author... You know, last year, does anyone know that it was the bicentenary of William Thackeray's birth? I don't remember anything being celebrated about that; born in 1811. This year is Edward Lear's bicentenary, who wrote those fantastic limericks. But Dickens has placed a stamp on the national consciousness, and the last thing I want to say to you: it wasn't just in this country. We know about America; you may not know that in France and Russia he was fantastically well read, by Dostoevsky and all the intellectual classes, and in fact in the sixties, when the Russian Navy came into Portsmouth, the only place that the sailors were allowed to visit officially was his house, because they viewed him as someone who was a champion of the working class. When I gave my last lecture, a lady, um, mentioned after the talk that her father had been in the Merchant Navy, and the only author the Russian sailors would talk about was Charles Dickens. So not only was he, sorry, thank you very much, not only was he a writer of England and Britain and America, but also of Europe. Can we go on to the next slide?
Here he is. It's not a usual picture; I was quite amused when I came over Wednesday to see the picture of Dickens when he was about 25 on the posters. We're used to seeing him with a beard, aren't we, and this is unusual; this was painted by Daniel Maclise in 1839. Take a look at that person: that's someone who has, you could say, almost overnight reached celebrity status. How did he get there? A few little things. He worked as a journalist for three papers: the Mirror of Parliament, which was owned by his uncle, his mother's brother; the True Sun; and lately the Morning Chronicle, which wrestled with The Times. The two great daily newspapers of the time were The Times and the Morning Chronicle; The Times was Tory and the Chronicle was Whig, or Liberal. He learned his trade doing shorthand as a young man of 17, and he would be in a very small lobby, writing down in shorthand the great speeches of the time, the Reform Act of 1832. He enrolled at the British Library and finished his education himself at the age of 18, and by this time, before he was commissioned to write Oliver Twist: Sketches by Boz. um, The name Boz was his nickname for his younger brother Augustus, "Moses", Moses being the son of the Vicar of Wakefield, written by Oliver Goldsmith. Charles Dickens often had a blocked nose, so when he said "Moses" it came out as "Boses", and then he shortened that to Boz. He wrote under the name of Boz for three years before people knew who he was; he had to tell them who he was, because there was all sorts of speculation going around about who it was. I, I hasten to add very quietly, are there any fans of the Brontë sisters here? I'm a fan of the Brontë sisters. They had the same problem, if you remember; they published under the names of Currer, Ellis and Acton Bell. They had to come out because people were speculating that women couldn't possibly write novels like that and it must be men. So Dickens came out and explained who he was. Just to give you an idea of the relative success he enjoyed: in 1836, the sketches he'd been writing for a
collection of papers, the Monthly Magazine (the very first one he wrote in November, um, November 1833) through to 1836, the first, um, Sketches by Boz, were all collected together, 35 sketches, illustrated by George Cruikshank, the same person who illustrated Oliver Twist. John Macrone, the publisher, in 1836 paid Dickens the princely sum of £100 for those sketches; bear in mind, three years earlier he was paid nothing for the Monthly Magazine contributions. Two years later, Chapman and Hall, who wanted to republish Sketches by Boz (very short stories, I recommend you read them), paid £2,000. What had changed? Essentially, The Pickwick Papers. The Pickwick Papers were started in 1836; the fourth instalment, in July 1836, introduced a character called Samuel Weller, or of course, if you follow the diction, it was "Samivel Veller". That one character projected Dickens almost overnight into literary superstardom. He left the Morning Chronicle in 1836 and he joined Bentley's Miscellany to edit it, where Oliver Twist appeared. Dickens was many things, but he was a very, very astute businessman; no one pulled the wool over his eyes. He was one of the first people to push for international copyright. What was going on with his work is he would write a sketch and someone would copy it and change the title, and they could do that ad infinitum in this country and in America. He was a shrewd businessman, so he said to Richard Bentley: I'll edit your magazine for you if you pay me five guineas a week; I will come up with a story to sell your publication. And the story he came up with, of course, was Oliver Twist. You see that man: he suffered, and he's destined for great success; of course, he had an oil painting commissioned from Daniel Maclise in 1839, which would have cost him a deal of money. Next, please. I just want to talk to you about Dickens and Wiltshire, if I may. Does anyone recognize this? I'm sorry it's a bit fuzzy. Where would that be? The Waggon and Horses, that's right; it's been there since about 1665, on the A4 London Road. Unfortunately for us in Ron and in Wootton Bassett,
we're not on the A4, the old London Road. So when you read William Cobbett in his Rural Rides, when he got on his horse and rode across the country, he went to Calne, but he didn't come to Wootton Bassett, because he went down the A4. What did he miss at Wootton Bassett? I don't know. But here we are, the A4. Dickens, as a parliamentary reporter in 1836, had to go to Bristol and to Exeter to record the speeches of Lord John Russell. He stayed at the Waggon and Horses, and in The Pickwick Papers there's an interpolated tale, in other words an extra story, called The Bagman's Uncle, which is set in the Waggon and Horses. The other item: in 1847 he wrote The Hymn of the Wiltshire Labourers, about labourers who were forced to leave the country and to travel to America. If you read Martin Chuzzlewit you will know that Mr Pecksniff, the villain, lives within about 25 miles of Salisbury; so if you imagine Salisbury in the middle and draw a 25-mile circle, that's roughly where Martin Chuzzlewit is set, or the scenes with Mr Pecksniff. Have we got anyone from Corsham here? I've got to be very careful about Corsham, because Corsham, if you go on the website, will tell you that they have a very strong connection with, um, Dickens, because they say that Samuel Pickwick was a name taken from someone in Corsham. Far be it from me to cast aspersions about the validity of that statement, but Moses Pickwick was in fact a Bath coach builder, not the name of a boy that was left in a basket somewhere in Corsham; but if you pass that tale on, you didn't hear that from me. So Wiltshire does not, as you wouldn't be surprised, feature much in Dickens's work; Dickens was essentially a London-based writer, okay, but there's a little bit about Wiltshire in there. So Tim, away you go. This is the cover. The first thing I want you to note is the original cover; it's dated 1838. Dickens the businessman realized that if he released his work in instalments, you and I would have to pay for the instalments one by one, and of course we'd want to buy the book when it came out as well. This is very interesting: this is an 1838 edition; Richard
Bentley, the name of the publisher. You'll see Oliver Twist, um, or the subtitle, is The Parish Boy's Progress; very important, because one of the other things that Oliver Twist is, as a parable, is a take on The Pilgrim's Progress; that's why he used that title, which is not often used now, as you see. The other thing I want to say to you: there are some fantastic books on Dickens over there. If you're going to buy any books on Dickens, may I make one recommendation to you: buy the books with the illustrations in. Illustrations are very important, as we'll see in a minute, to understand what Dickens is saying; every illustration that appeared in Dickens's work, Dickens took control of. If you buy works of Dickens, buy illustrated ones, and if you want some tips about which types of books to buy, I'm more than happy to tell you later. This story was unusual. You see, most of the novels coming out at the time were called silver fork novels; they were about the rich, a bit like the Jane Austen type, and I am also, may I hasten to add, a great fan of Jane Austen as well. This was different, because it wasn't about the upper middle classes; it was about the criminal underclass, predominantly of London. It was about social conscience; it was about themes people had never read before, and it was not well received by the critics, partly, as we'll see later, because they didn't actually believe that what Dickens was writing about was real. So it was a groundbreaking novel. Pickwick Papers was a wonder; who's read Pickwick Papers? Isn't it a wonderful story, a picaresque: Mr Pickwick travels around and it's just a series of episodes, comic episodes, although there are some very serious points in Pickwick Papers. This, I would argue, is the first serious novel that Dickens undertook. Tim? This is a picture, probably well known; I don't need to interpret it for you. It's called the parable of the Good Samaritan. This is the key, and I will explain to you why, in my view, to understanding Oliver Twist. In the postmodern society we live in now, in the secularized society, when the
BBC or ITV or any television programme adapts Dickens, the first thing they extract is religion. Now that's unfortunate, because if you take religion out of Dickens you cannot fully understand the novels. So let's try and understand this story. We know the story from the Bible, I trust: a man goes down from Jerusalem to Jericho; he's attacked by robbers and left lying on the road for dead. Three people come past: one is a priest, one is a Levite, or lawgiver, and the third is a Samaritan, someone who has nothing whatsoever to do with a Jew, and it's the Good Samaritan who helps him. Dickens would know it; he's very familiar with the Bible, very familiar with the Book of Common Prayer. The question preceding that, as Jesus taught, was: who is my neighbour? And this is what Dickens was trying to put across. We know Oliver Twist, I trust. The first seven chapters of Oliver Twist are about how the poor are abused in the workhouses. Oliver Twist, when he's born, is brought into life by the parish surgeon, who likes a drink, a bit like Sairey Gamp. He is laid in front of the fire; his mother has died; and will he live or will he die? Who cares? Who cares? But he struggles and comes back to life. Mr Bumble, the fat, overweight beadle, comes along, gets his book out and says: S, T, Oliver Twist. Until the age of nine he stays at the farm. The parish, which represented in those days, not now, I appreciate, the church: the priest has failed Oliver Twist. Who's coming along next? Well, if you remember, Oliver gets involved with Fagin and Jack Dawkins, or the Artful Dodger, and Charley Bates. He doesn't realize that Fagin is an evil person; he thinks he's a very nice gentleman, and he doesn't realize that going around playing a game taking handkerchiefs from people's pockets is criminality. And they arrive in a courtyard in Clerkenwell, and just at that point good old Jack Dawkins tries to take a watch from an elderly gentleman. When he fails, of course, Charley and Jack leg it; Oliver stands there, dazed, not
knowing what to do, and he ends up in court under the gentle care of the police magistrate Mr Fang. The law: what sort of response is he going to get from Mr Fang? Sorry, I should say Mr Fang is actually based on a character that Dickens met when he was writing, Mr Laing of the Hatton Garden magistrates' court. The age of criminality in 1838 was 14. There were three things that could happen to Dickens, sorry, that could happen to Oliver Twist: one, he could be transported for something he didn't do; two, he could be hanged; or three, he could be given hard labour. Any of those three would have ended Oliver's life at that particular time. The law has failed, as it was failing Oliver and the type of children that he represented and the community of the poor; the church, in the form of the parish, was failing. Who was going to rescue Oliver? The Samaritan: the rich man, Mr Brownlow. We need to understand that, culturally, the rich had nothing to do with the poor; the rich predominantly had no idea what was going on with the poor. Let me give you an example. In 1832, 50% of the children under three died in Manchester from cholera and typhoid, because of poor sanitation. When they designed that city, it was designed intentionally, and Friedrich Engels writes about this, on the industrial classes, if you want to read it: they designed the city so that those who were rich, in their carriages, would never ever go past the poor places in Manchester. They didn't realize what was going on; there was a segregation between them. If we were to go to St Bartholomew's, this is a lovely church, in the 1830s, 1840s, what is likely to happen? The social divisions would be mirrored in the church. The poor in the workhouse would either be in the benches at the front or they would be in the gallery at the back; wealthy people would have their own box pews, or their own separate pews, some with curtains. Why? We don't want to see the poor. The social divisions were not seen to be wrong; they were etched deeply in society. Dickens is saying this: Oliver is left, in the book, lying on the road
outside the magistrate's court. The law has failed him; the church has failed him; who will save him? But Dickens uses that analogy not once but twice. As the story moves on, who is it that brings light upon Oliver's history, who is it that rescues him? Nancy, the prostitute. I'm going to tell you something about Nancy I don't think you know; if you do know, I'll be quiet, now how about that, is that a fair deal? Nancy could read. Did you know that Nancy could read? Just before she's going to meet Rose Maylie and Mr Brownlow on the bridge, London Bridge, she says she has read something; that is very, very unusual. In 1837, 40% of children in this country never went to school; someone of that nature being able to read was very unusual. She was a social outcast, and yet she was the one that helped. Dickens uses the parable of the Good Samaritan from very early on: the parochial seal of the parish where Oliver is born is the Good Samaritan. Mr Bumble shows someone: look at my coat, I've got a special button; it's the parable of the Good Samaritan. This is what Dickens is saying, of course he's saying, to his middle-class readership: who is your neighbour, who are you going to help? Tim? Now again, I do apologize, um, the focus might be better. This is why I said about the illustrations a few moments ago. This is Mr Brownlow, this is Mrs Bedwin, his, um, housekeeper, this is Oliver. Mr Brownlow is looking at a picture here; I've given the game away, haven't I. What's that picture? Blow it up, look at it: it's the parable of the Good Samaritan in that picture. That's why we need to look at the illustrations: Dickens is demonstrating and reinforcing that message. Mr Brownlow is the Good Samaritan; Oliver is the person that is left. Tim? And here we have the scene on London Bridge with Nancy, Rose Maylie and Mr Brownlow, and Noah Claypole here. Noah Claypole is an interesting character; Dickens uses him to say that even among the poor there is a hierarchy. Noah Claypole is a charity boy; he's been brought up in a charitable institution, which is one step higher than Oliver. Oliver
is the worst of the worst: he is an orphan; his mother, Agnes Fleming, gave birth to him without a father; and he's from the workhouse. Noah Claypole calls him "Workhouse", and this is where Oliver is, and the first seven chapters of Oliver Twist just relay how difficult it was. Tim? This is now Providence. Now, you will say to me, Dr Hoop, all writers use chance in their novels. Marianne Dashwood in Sense and Sensibility: she's off up the hill, there's a thunderstorm, and who rides to her rescue? Mr Willoughby. Chance. Jane Eyre runs away from Edward Rochester; she's all on her own, she's on her last legs, she knocks on a door, and who opens the door? It just happens to be a relative. George Eliot, in Adam Bede: we've got Hetty Sorrel carrying on with Arthur Donnithorne, the son of the squire, and who comes around the corner? Adam Bede. Yes, writers do use chance, because if you're going to write without chance you're not going to get anywhere. I just want to say something to you, though, particularly about what I feel Providence is; Dickens sees it and takes it to another level, what he believes to be Providence. If I can just, as a prefatory remark: there's a big argument going on in the church at that time. Now, to us, we may feel today that the church isn't relevant; I want to tell you, when Dickens was writing, the church was right at the centre of all that was going on. Dickens's readers were incredibly religiously literate, and the argument going on at this time was special Providence or general Providence. General Providence was the teaching that God sets the world going and then we're just left to carry on; special Providence is that God's hand is upon particular people at particular times. In my opinion, and it's not just personally my opinion, Dickens is trying to put across an argument for special Providence in Oliver Twist. Oliver is a special child. He's special because he's incorruptible, because Fagin's been employed by Mr Monks, his half-brother, because there's a will floating about. Dickens, there's always a will, isn't there, and there's always
a way, of course. But there's always a will floating about in the background; yeah, there it was. And in this will, Oliver's father had written that if his son was ever convicted of a crime, all his money, and he was a rich man, would go to Monks. So Monks pays Fagin to corrupt Oliver, to get him to commit a crime; but Oliver won't do it, because he's incorruptible. At the age of nine, children were taken from the farm, the farm where children were kept up to nine before the workhouse, and then they were sent out to work. In this scene we have a wonderful gentleman, Mr Gamfield. It wasn't until 1839 that it was illegal to send boys under 16 up chimneys. Mr Gamfield was a chimney sweep; his raison d'être was: send the boys up the chimney, and if they don't want to come down, we'll light a fire underneath them. The lovely parish is wanting to sell Oliver as an apprentice to Mr Gamfield, and just about the moment the magistrate is going to sign, he knocks over a bottle of ink, and Oliver is saved. You could say out of the frying pan into the fire: the next destination is that wonderful undertaker, Mr Sowerberry. Mr Sowerberry, and, um, Oliver is employed as a mute, to walk in front of the funerals. Just a little anecdote for you, if I may. Dickens wrote about what he saw in Oliver Twist. When a parish child, someone in the workhouse, died: in 1837 there were 48,000 funerals; 27,000 of those were children under the age of ten. Incredible, and those were the funerals that were recorded, by the way. In the novel, the clergyman, and I wouldn't suggest for one minute that a clergyman today would do this, but the clergyman comes along two hours late, because it's only a poor person, he's not going to get paid for that; he comes two hours late, he spends five minutes over the pauper's funeral, the pauper, the poor child, and walks away. When Dickens included that in his novel he got a very nasty letter from a clergyman: how dare Dickens suggest that clergy within the established church would bury a poor child in that way. And Dickens sent back a very interesting note saying, "Thou art the man", because Dickens
had seen that very clergyman doing exactly the same thing in Cooling in Kent. Interesting. Here we go. The next thing for Providence: this is the incident I was telling you about, when something's going to be stolen. There's Charley Bates, there's Jack Dawkins, there's Mr Brownlow looking at the book, there's Oliver not having a clue what's going on, and in the background is lurking, just around here, Monks. This is the moment he's going to gain, because the minute Oliver's convicted he will get the fortune. It's a key moment in the novel. But of all the people, of all the people that Charley Bates and Jack Dawkins are going to rob, it's Mr Brownlow: Mr Brownlow, the best friend of Oliver's father, Edwin Leeford. It's a key moment. Now you may choose to say, well, Dr Ho, that's very interesting, that's just chance; I think, in the context of the whole novel, I would suggest, you know, it's more than that: it's Providence. So we've done Parable and Providence. Tim, we need to move on; I am very conscious and aware of the time. My greatest critic on time has gone out; my daughter has gone out for a meal this evening, so I'm relying on my wife to make various signals when the time comes. Now, some of you may have heard of Sebastian Faulks; I know the head of English where I work has heard of him, and I have discussions with him about that. Fagin: Sebastian Faulks would suggest to us that Fagin is a rather pleasant elderly man, he's not as bad as we may think. I want to say to you, if I may, that Dickens doesn't agree with that. It does frustrate me a little bit when writers and television producers comment on Dickens without actually, possibly, reading the book in its entirety. If you see this picture here: Fagin, I would suggest to you, throughout the novel is described by Dickens as a loathsome reptile. When he's seen with Sikes, and Bill Sikes is a hard man, as we know, he describes him as the devil; if you read carefully, Fagin is described as the devil. Here it seems quite a convivial scene, doesn't it, you know, but look at what he's holding
here. Symbolism: look at the fire. And Fagin is the worst type of criminal, the corruptor of children, who was actually based on a real-life character, um, who was around in 1831. He achieved a bit of notoriety because this man was arrested; Solomon was his surname, I think it was Robert Solomon; he was arrested but escaped, and he was a bit of a celebrity person. Not a Jewish person; he was actually Irish, so that's where that came from. And in the end we find that very powerful scene, do you remember, when Fagin's in Newgate and he's just about to be hanged the next day, and Mr Brownlow takes Oliver to see him: the judgment that is supposed to take place. Providential things: one big theme in Dickens. I like Dickens; Thomas Hardy, I love Thomas Hardy, but he's a fatalist, I've got to say that to you. In Dickens the good always prosper and the evil get what's coming to them, and I can live with that; George Eliot, a bit harder for me to understand, but there we are. Next slide please, Tim, because we are moving on. Right, the poor. How bad was it? I am not going to be able to stand here in Wootton Bassett on a lovely day and help you to understand how bad it was for the poor. These are pictures, and you can get these off the internet, as I've done. There was a, um, French artist called Gustave Doré; he was employed in 1859 to spend five months in this country, for three years, and go and paint what he actually saw, and one of the things he used to paint was the plight of the poor. In 1857 it was estimated that 100,000 men were homeless; 18.7% of the population, which would work out to be 3.75 million people, were dependent on the Poor Law. It was terrible, it was terrible. A rich, wealthy man living in London with his family in 1840 would expect to live to be 44, expect to live if things went well; someone who was, say, middle class, a clerk or something like that, would expect to live to 28; a tradesman or someone lower, their life expectancy was 22. A carpenter, which in my opinion
is a very skilled labor would have to work 70 80 hours a week to support his family and at the time Dickens wrote what did the government do there was no there was not much Charities at that time most of the ch's kit in 1851 they weren't about when he was writing you see there was a big problem London was the only city in 1801 that had a population of over 100,000 by 1901 there was 15 in 1850 400,000 Irish people came into the country predominantly Bristol Liverpool and London the population of London which is the biggest city in Europe grew GRE by 20% every 5 years the population of the United Kingdom in 1801 was 16 million by 1901 it was 47 million at the time of Oliver Twist was written 75% of people lived in cities 50 years before that was only 25% and who was supposed to deal with the poor the government no the parish was supposed to deal with the poor but they had no way of doing it so what did the government do they brought in the poor law Amendment Act of 1834 CU they had a theory you see the theory was this partly by Jeremy B utilitarian the poor were poor because they enjoyed it the poor were poor because they enjoyed it what are we going to do about that I know what we'll do we'll make the workhouse so bad that no one want to go in there now the poor didn't like the workhouse if you read our mutual friend you know that Betty Higden an elderly lady does everything she can to get away from the workhouse what would happen if your family was taken into the workhouse prior to 1834 they would be cared for in the community post 1834 they were put in the workhouse now I like words do you like words what do we have in the foot and mouth do you remember the foot and mouth disaster there was a big outcry because they were killing healthy cows that was terrible wasn't it what did the government do we call them potentially infected cows oh well that sounds a lot better but they're still healthy cows in the Gulf War we had not bombing hospitals and schools it was 
collateral damage language is really important you know in the Bible and the Book of common prayer there are specific responsibilities for for the poor we won't call them poor houses we call them workhouses we won't call the poor poor we call them popers families will put in the first thing would happen You' be institutionalized husbands and wives split up children split up an awful life Tim time has run away and I want to give you a chance to respond to some questions this is a picture as you can see this picture was referred to by Dean Arthur Stanley the dean of Westminster ABY a week after Dickens was buried Dickens never wanted to be buried by the way at Westminster Abbey he could not stand fuss he could not stand attention it was all they could do was to put up a plaque for in Rochester Cathedral nonetheless the country was not having their greatest writer buried in obscure graveyard and this is the picture that Arthur Stanley Dean Stanley and Westminster ab's talked about it's the picture of Lazarus and the poor now that's less common let me just explain it's a story in the Bible about how a rich man has much food and partying and everything he wants but at his gate there's a poor man called Lazarus who has nothing who actually has to feed with the dogs and what Dean Stanley said of Dickens there has never been a greater writer that has revealed the extent of the poor to the rich he pulled back the curtain that separated the rich from the poor one of the greatest things Dickens did was to say to the wealthy and predominantly his readers were middle class of class this is the true state of the poor and I will finish with this in Oliver Twist you remember the last scenes when at Jacob's Island Jacob where Bill syes is hiding out and he's with Charlie Bates and when he actually falls when he sees that Spectre of Nancy Dickens wrote about a place called Jacob's field in that which was a place that Dickens went to when he worked in that blacking Factory and he was 
12 years old he worked five walk five miles to work and five miles back the time he was 15 he knew London like the back of his hand he knew London like the back of his hand Jacob Island was a horrible CER a filled destitute place if you went and visit that you couldn't believe people live there but they had to and such was the ingrained sense of um indignity about Dickens there was a meeting of the public health Corporation in 1841 that actually found that Jacob's Island didn't exist and it was a figment of Dickens imagination Dickens was a great writer he had experienced the poor and poverty and he wanted people to understand that and that's one of the reasons why in my opinion that he's been such a great enjoying writer |
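The lecturer's Poor Law figures (3.75 million people quoted as 18.7% of the population) can be sanity-checked against the population numbers quoted elsewhere in the talk; a minimal sketch in Python, using only the figures as given:

```python
# Figures as quoted in the lecture (mid-19th-century Britain).
poor_law_dependents = 3_750_000   # people said to depend on the Poor Law
share_of_population = 0.187       # quoted as 18.7% of the population

# The total population these two figures jointly imply.
implied_population = poor_law_dependents / share_of_population
print(f"Implied total population: {implied_population:,.0f}")

# Consistency check against the lecture's own endpoints:
# 16 million (1801) and 47 million (1901).
assert 16_000_000 < implied_population < 47_000_000
```

The implied total comes out a little over 20 million, which does sit plausibly between the 1801 and 1901 census figures the lecturer cites.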
english_literature_lectures | Frieze_Lecture_Themes_in_the_world_of_Charles_Dickens_Part_2.txt | thank you very much um i'm delighted that you're here and i'm very excited to be able to talk to you today about uh charles dickens and the place he held in 19th century britain i have only one complaint kai that's been my place in the speaking order i'm following the dynamic wonderful speaker you had last week so i'd like to immediately lower the various expectations but what i would like to talk to you about i've titled the tale of truth is obviously playing on uh dick and sale of two cities um what we are going to talk about is a kind of series of transformations which are really revolutionary but what's perhaps most remarkable is the absence of political revolution in britain anyway to accompany them what i'd like to say for a task for today is not for me to try to convince you of a certain argument what i'd like to do is invite you to have a conversation so i'd like to tell you a little bit so remind you of some of the key events that happened in britain during dickens lifetime and then let you know how scholars have disagreed about the interpretations of those events and i'd like to get you to tell me since you're reading the hands well which things you find which scholars views do you find the most persuasive um perhaps a certain alpha might simply take part of a tale of two cities it was the best of times it was the worst of times one of the most famous beginnings of all models and indicators gives us a really important clue about how he viewed his own era this long part here you can see the second part that will be in gold prints he says in short the period then the revolutionary period in advance was so far like the present period that some of its noisiest authorities insisted on it's being received for good or evil in the superlative degree of comparison only dickens like those in france lived in a truly revolutionary era so how does life fit in what has been 
described as an entire century that resulted in the revolution of life to use leo bale's famous great there are an awful lot of revolutions we'll have our hands full mainly talking about one and a half of these it's first the industrial revolution in this first two phases and the connection it has to the unfolding agricultural revolution which had been done in earlier centuries but which was really picking up momentum during the flight time in terms of political revolutions however the 19th century is right with revolution in addition to the french revolutions and the napoleonic wars there's also there's a second and then a third major wave of revolutions and sweeps across france and much of europe in 1830 and again in 1848 to speak of only some of the major political revolutions at least another revolution in labor occurs early in dickens lifetime when slavery is ended for the most part in the british empire the latter part of his life was one which also was accompanied by revolutions in political events and the shifting power relations inside europe so britain is embroiled in the crimean war together with france and several other powers against russia and it wins it's been a tremendous cause think of the charge of the life brigade if you will from which that point still also there are revolutions in the organization of european states the many german and the many italian states combined the unified precisely towards the end of dickens's life and just as slavery had ended earlier in the british empire so too there is uh an unfolding story of one hesitate susanna holding an unfolding uh story freedom but at least a greater freedom uh russian serps are free in 1861 and in the latter part of the 1860s as well uh slaves are freed in the united states there will be many flies of the ointment however and one of them is that even as many people are becoming freer in places like russia and places like the british empire places like the united states this is also the story 
of new imperialism britain is busy subjugating new parts of the globe where places are due to its empire they are busy introducing more restrictions on the labor or restrictions on the political freedom of those who become subjects in its empire inside britain though friends did become much more free to shape their own government in a very peaceful way in 1832 when dickens was about 20 years old one of the most fundamental reforms 1830 to reform act it got rid of a lot of political abuses for instance there were things called rotten boroughs where very few or in some cases one elector would choose a representative from parliament those were done away with the the map was redrawn for electoral districts and the franchise was thrown for european standards wide open meeting uh upwards of perhaps 20 percent of adult males could vote still our women that made britain one of the more democratic places of all of europe in fact in the whole world certainly it wasn't enough for a minute uh we'll talk briefly later about the movement called the chartist movement ultimately some gathered close to a million signatures at least several hundred thousand signatures demanding such things as universal male suffrage the movement peter's out though in 1848 the same year that that second great wave of revolutions sweeps europe what's really telling in all of this i think is this untold under emphasized story that while britain is busy reforming in a very unsatisfactory very grudging way it doesn't have a revolution there aren't many heads that literally roll places like france have a revolution in 1848 but they don't keep the republic they get it quickly gives way to napoleon's empire places like russia and austria have revolutions people die in the streets fighting for political reforms but for the most part those reforms are gradually exciting in the rest of europe this halting grudging hugely imperfect set of reforms in written sticks it in fact in 1867 there's another great reform 
act this time perhaps doubling the electorate about 40 percent of adult bales were able to vote by the uh just shortly after dickens death in the 1880s perhaps 60 percent of adult males would be able to vote still no limit in fact it wouldn't be until after world war 1 and 1918 that adult males and women 30 and older would be allowed to vote well we also have to say there's a lot of nuance here while there is a gradually unfolding story of greater political freedom inside britain the rate is very enormous so for example your chances of being able to qualify for holding enough property or paying enough in taxes to qualify to be able to vote were much higher if you lived in england much slower if you lived in a place like ireland or scotland that's the central contrast then britain gets hugely imperfect reform but it's mostly peaceful it's little wonder that is precisely this era that historians begin to write a story written's past that later we come to call wigish playing on a political party it's a story that tells about the almost inevitable progress that certainly the british perhaps all of humanity were destined to enjoy it's something which is a kind of a heroic narrative which very few historians would tell today but given this uh gradual uh widely to the franchise it helps us to understand something where this perspective on the past comes from if dickens had looked around him at monarchs he would have also seen a bit of a checkered backer when he was a young man growing up the king was george iii who americans will know and probably have demonized from the revolutionary past the british held a very different view of george iii while his reign did end in periodic bouts of madness probably driven by disease and in the loss of the 13 colonies many british remembered him very fondly was a very devout person religiously and a very devoted family person this was for monarchs in europe counter-cultural if you will george iv who succeeded him also had a bit of a 
mixed record while he has a victory over napoleon it's while he's regents anyway that the british oversee the suppression of demands for surprise political reforms in 1819 several people are killed as cavalry rides into a gathered crown near manchester the so-called peterloo massacre the king further provokes controversy by inviting a kind of discussion about whether catholics might after several centuries of being excluded in effect from public life be allowed to be emancipated he also then stirs the controversy by eventually taking a strong stand against him george iv really helped to drag the reputation of the monarchy into disrepute because he was very much a pleasure seeking fun-loving siberian monarch fighting like his father he in turn is succeeded by willie luther who held who is in his very short reign responsible for a number of remarkable and ambiguous reforms for example the poor law was reformed but the reform itself was predicated on a system of making the poor live in poor houses which were supposed to be worse than the alternatives that they could seek outside the four houses since these standards of subsistence were already quite low by this point in rhythm that meant that life in these houses could be quite miserable so it's well-intentioned for some pretty brutal results likewise finally there are some limits placed on child labor we'll talk more about that in a minute this also is a two-edged sword it's another well-intentioned reform intended to protect children from some abuses in the industrial revolution one of the effects however is to uh further impoverish many of their families who depended on these children wages weren't increased for the working mothers or fathers of these children so in effect they simply have family wages limited another well-intentioned reform was peculiar results much more positive obviously is the abolition of slavery and the 1832 reform act william is followed by victoria who presides for many decades over written 
um first is a period in which some of the great differences in income uh some of the great uh disadvantages of belonging to the lower classes began to be eroding all that to the good but as we mentioned earlier this is also a period when the british empire is busy subjecting parts of the earth to its rule creating an empire ultimately so fast that as the segment the sun never sat upon him critics of the empire charged this was because god didn't trust englishman in the dark so what i'd like to mainly talk about having gone through some of those political reforms and some of the major events under various marks is much more the social and economic intersection of the industrial revolution with those political reforms um and i'd like to especially get this to think about three major changes that the industrial revolution brings about one is the kind of set of changes that everyone thinks about it's the efficiency and production the changes in technology and industries and i'd like to take a single example the innovation of the steam engine that gets you to think about how that gets applied so for example the steam engine really something like the modern steam engine is devised by 1712 you can see a picture of it up here in the top part of the slide it's originally devised to help pump water out of mines the effect is you can bind deeper you can extract more coal the shafts don't fill up with water nearly as much eventually a few decades later people figure out you can hook up the steam engine to the process of iron smelting as a result in just a very few years iron output quadruples eventually arc right what devised a system where you could hook up a steam engine to a limb and we the result is that in a very few decades cotton output increases according to the sources that indicated there that output increase is 130 full not 130 134 and of course eventually someone figures out you can hook a steam engine multiple carts lay along tracks and create railways when 
dickens is a very young man when he's 18 there are about 157 miles of railroad track in the whole of britain uh by the time uh towards victoria's at the end of victoria's reign there are about 30 000 miles of traffic we'll talk about the implications about it in just money all those seem like they're too good and indeed there's much good about them but this is also the worst of times even it's the best of times here are a couple of charts to flip for great advances but on the top for example shows you an increase in production over the whole course of the 19th century if you'll notice with apart from a few minor variations the direction is up up industrial production in terms of tons of coal mines tons of iron smelted and so forth rises almost an extra point the wigs those wiggish historians have been happy likewise the bottom chart shows us increasing rates of literacy they the top of the line is for men the bottom line is women eventually they converge towards our present all that is to the good what's the flip side of that much greater industrial proficiency efficiency um here's one one seven contemporaries responded uh how they've responded to that anyway in 1786 uh shortly before dickens is born a number of workers put together a petition they're just four of them dared to sign their name they claimed to speak over half of thousands what they laid out was some of the cost to them of these these enormously more efficient machines you could read the quote which is here the claim is that a single machine in 1786 would mean 12 unemployed beavers scripts who had done a portion of the process of textile manufacturing this is the flip side the darker side also if we even continue to explore that notion of changes in technology and efficiency right we have to put ourselves back into the world of 1812 when dickens is born when dickens is born has dominion observed the fastest a human can travel is the fastest a horse can carry them or the fastest a wind-powered ship 
can carry him across the water by the time he dies in 1871 there are railroads which can go 50 in some cases perhaps 60 miles an hour there are steam ships which actually initially are not faster than many of the wind powered ships that can sail up river currents even in the ocean where there are dolphins it provides a regularity and eventually a much greater speed when dickens was born in 1812 the fastest most humans could communicate anyway it would have been through a lack over long distances earlier form of the telegraph though have been worked out by 1837 and onwards you can communicate almost instantaneously one example of how this affects communication then is to say for example uh when the internet intercontinental telegraph cable is first laid in 1858 you can you can communicate in just a very few moments it's almost as rapid to them as the internet is to us this cut off about 10 day journey by vote which would have been the fastest that you could have otherwise sent a letter from europe to the united states or vice versa what injuries make of these changes they're certainly staggering in their ability and their proficiency but they're also off-putting in a variety of ways famously later in the 19th century the emperor of austria-hungary francis joseph said for example he hated the telephone because he said you can't beat him on the telephone you couldn't wow your audience in your splendid military uniform decked out with metals set them in your castle these new technologies like the telegraph and later on tools as a telephone uh disembodied communication they are liberating they are democratizing and precisely for that reason they are also very often to those who have a vested interest in the old technologies a second major way that the industrial revolution changed uh things had changed fundamentally how humans work for example many became more specialized in their labor as adam smith had predicted but it also meant the undoing of a variety of special 
skills people had developed being a weaver was more or less valuable if you were competing against cheaper products made by machines for example this mechanization of labor made for tremendous gains and efficiency again think of the workers petition at the same time it led to a transformation of the way people worked and where they worked in the factory system that's devised we see a reversal the way labor had been organized in prior eras for example just before tickets have been born there have been a lot of industrial production using for example a putting out system now workers are gathered to a central place where you have an expensive machine like a steam engine hooked up to machines this transformation taking them out of their homes where they might have done their labor and moving them into factories was one change but perhaps a still more revolutionary change was what the great historian epa thompson called work discipline in the putting out system people working in their homes or in smaller factories or workshops wouldn't have a lot of control over exactly what they did at any given moment in the day they could have controlled the pace of their work they could have taken breaks when they wanted or needed to they would have taken a lot of their cues from nature from rising in the sun from the seasons outside with the factory system we see a greater degree of uniformity it's that what we see is people being controlled by the clock we see people being transformed and overseen in every aspect of their labor and many workers found this to be very open this led some contemporaries to worry about the effect of the industrial revolution william blake famously in the early 1800s wrote about the dark satanic mills belching out their smoke enslaving the laborers perhaps there was in fact a great deal of child labor but we have to be careful it wasn't new for children to work in fact most people most families depended on essentially every able-bodied person in the 
family to work in some way to contribute to the family economy but in as people began to move out of rural and agricultural settings in much greater numbers into cities and into industrial work we see a transformation of the kind of labor and in the hours of labor and in the degree of parental control over labor um there are a lot of kids who work uh jane humphreys recently has estimated that in written by the early 19th century maybe 15 of the whole labor force was comprised of children men and women also work but there too we see some surprising changes develop employers often could get away with paying women and certainly children lower wages than men as a result many men who had been bred burgers for their family became unemployed were underemployed women took on a much greater role and in one sense this is the best of times this means a great liberating elements if women have access to wages they in theory could exercise a greater degree of control of influence within the family but it's also the worst of commons in many cases property laws are such that women have far less control over these wages they've been in their lives even if unemployed controlled a great deal of economic assets as well transformations and labor are brought about by the abolition of slavery we talked about and so too in part at least by something that happens in 1832 the senior of the reform act parliament gathers a committee together it's called the saddler committee they interview a lot of witnesses to find out what working conditions are like this occurs then when dickens is 20 years old he's a young man looking around and it's not been that long since he's been engaged in a similar kind of labor himself and in the sadler report i'll show you an excerpt in just a moment the parliamentary committee concludes that there are abusive working conditions and the result is a reform act the 1833 factory act that seeks to improve working conditions the best of times that problem has been 
recognized and it's been dealt with but it's also the first of times it was hugely a perfect act it had very few people to regulate it to enforce it it requires many other factory acts in all those years you see indicated there to come close to properly regulating industrial labor what do the workers have to say about this less than we might think based on the sources that are left behind but remember the lower literacy rates and also remember this it's very difficult if not impossible for workers to organize in 1799 in the combination act unions had been in effect alcohol there's a brief loosening of this around 1824 until it's reintroduced again even though the restrictions on forming unions and the actions of unions have been eased though after 1824 it's not really until 1867 that unions are finally fully allowed in the meantime many people are promoting critiques there are varieties of socialism the most famous we know of course is that divides by marx and engels they dismissed the earlier attempts by people like robert owens who dickens would surely have known about and who was active in reforming britain as well as the u.s he dismisses them as utopians and promotes his own scheme as sure and scientific after they can death other socialists come along including the so-called fabians who are interested in reforming so let's go back to that child labor what was it like exactly to be a worker especially a young child in one of these factories well not bad if we believe andrew urine who was professor of technology he claimed to have visited many mills in and around manchester what he claimed was there wasn't any abuse of children and instead he had observed that they were cheerful and alert taking pleasure in the white play of their muslims he also claimed that the workers hadn't any interest in unions but the unions bullied them into striking this is after the factory reform act of 1833 in 1832 the sanctuary committee found testimony like this of an adult who now 
was able to talk about what his experiences have been like as a six-year-old child you could read it as well as on what he tells you is essentially they were weak and this was hardly uncommon this may have resonated with dickens whose father was confined to a debtor's prison for a while and in order to help his family meet ends dickens had to work maybe 10 hours or so per day and those work hours would have been pretty difficult 10 12 even 16-hour days were quite common for ordinary workers in mines and factories at the end of early industrial revolution what about the poor david ricardo devised the famous so-called iron law of wages arguing in effect with law supply and demand indicating that workers but in effect always see their wages reduced to the lowest level necessary to sustain them he argued therefore that if you wanted to intervene create a kind of a state system to benefit the poor you were working against the very laws of economics itself his conclusion then is for the government to do in effect nothing or next enough he argues that the four house houses and the four laws needed radical reform in order to prevent the poor from multiplying others much poorer than ricardo who had made a fortune on the stock market that we demand the vote we design we demand universal suffix remember this is still when uh ricardo was writing even before the reform act of 1832 when 20 of the adult males were allowed to vote marx put it more together with his co-author friedrich ingalls put it much more concisely he concluded the communist manifesto in these famous words workers of the world you know you have nothing believes but your chains the third set of major changes that the industrial revolution occasion is changes in the way people live population grows by staggering amounts i'll show you another chart in a minute but when dickens is born the population of england and wales and scotland is something close to 12 million or so by the time he dies it's about 25 billion 
or so where do you put them alone the size of cities like manchester and leeds doubled and tripled in size in the space of a generation this rabbit in urbanization called for occult attention to major problems it's difficult to say whether their work was a greater percentage of the poor but the poor become at least much more visible in the swelling urban centers hygiene public sanitation is horrific it takes decades for the cities to catch up to this rapid growth another change in the way people lived is through the growth in your classes which marks any close comments at all observing that there have been many classes in the past but in their era the middle classes and the working classes were swelling in number and in influence especially the middle classes in influence this inclusive claim led to lots of pressure on family life as for example male north early on at work women became red winters english didn't minus tall but it was a certain it did create a certain amount of tension with the bourgeois victorian values of the period so did the rich class stratification early sociologists like uh federal anterios and max viva said this is what this amounts to what we see in the pre-industrial era is a world in which most people live in places where they know most people around them they are in tight-knit communities they don't go very far from where they're born they live in a kind of community instead people move into cities where they don't know each other as well they might not know the people around them they have a transformation they relate only in very abstract and attenuated ways they become a society the experiences certainly were also very different for men and women in the different classes in dickens youth one of the hallmarks of being a middle-class family was having observant at least one if you could afford it another hallmark was having the mother of any children in the family not work for a wage outside the home and this is one of the first times 
maybe the first time in human history when a very large number of people are able to afford unpaid labor outside the home the way sociologists often put this is to say what we see is something remarkable in the past the site of production where people have earned their money and earned their living had always been very close to the site of reproduction for the family for the middle class at least these two things are brought apart middle-class men often go out into the world out into the city out into the factories earn their living and their family is a place they come back to women are in the middle fact supposed to be a higher moral standing they're supposed to be angels who redeem the men who've had the sharp elbow competition in the world there were no such views for the most part about working classmen who were also supposed to work and had to to sustain their families another rapid change is a change in the perception of time and in need of space itself when dickens had been born before there was the railroad most people indeed never went more than about 30 miles or so from where they were born suddenly with the new technologies available many people do the effect of that scholars have claimed is to essentially obliterate space where you came from suddenly it mattered much less as you quickly moved from one part of the country to the next you could escape the law you could seek job opportunities it made you much more mobile even as your placement class made you much less mobile even the perception of time itself could be altered as people become less accustomed to rising and setting in the sun accused of the church bell ringing to announce lunch accuse of the seasons to signal what sort of labor you should perform they become much more used to the factory belt factory whistle the time clock kept by their managers so it's the best of times it's the worst of times look here's the promised chart showing you the rapid increase in population something must be 
going right if suddenly a country can sustain a much larger population. something must be going right in the cities: by 1851 probably more britons lived, for the first time, in towns and cities than in the countryside. even marx was grateful to capitalism for this; he said it had freed people from what he called the idiocy of rural life. his writing partner engels had a different take: after viewing some of the sections of manchester, he concluded that many of the people who were seeking places to dwell and employment in those swelling cities were in effect living in desperately poor, awful slums. both of these visions exist; both of them are with us in dickens's world. dickens's is a world which produces the most stunning advances in technology. by 1851 and the famous crystal palace exhibition, which dickens would certainly have been aware of, what we find is britain as the so-called workshop of the world. it showcases its technology; other nations come to learn, to emulate britain's great example in productivity. at the same time, i'd like you to pay attention to the worst of times, if you will. that chart on the other side shows you the population of england going up and up and ireland's going down. why? because this is also the period of the great irish famine: hundreds of thousands, perhaps a million, people starve; still more emigrate. so what do we make of this? scholars have looked at how long people lived, how much income they earned, these kinds of things, and have concluded different things. for example, jackson, in a famous analysis, argues it's the best of times and the worst of times. first the worst: there were tremendous inequalities in incomes and in lifespans of people up till about 1867 or so. after that, jackson found through a really rigorous statistical analysis, things start to even out; the poor and the working classes start to increase their income fairly rapidly, they close the gap. what we see then is a tremendous
growth in wealth, enormous wealth created in england, and also a tremendous increase in the size of the middle class. by the end of the 19th century britain has probably proportionally the largest middle class in the world, maybe 20 percent of the population by the end of the 1800s. after dickens's death even workers have rising real wages; the industrial revolution is benefiting them too, not merely through the cheaper goods that are available but through much higher incomes that let them afford some of those things. what we also see is another fly in the ointment: britain, by the second half of the 1800s, is beginning to decline relative to rising industrial powers; by 1914 both germany and the united states overtook britain as leading industrial powers. so where does this leave us? we've had kind of an overview of the effects of the industrial revolution, the social and economic effects, matched up with the political reforms in britain. well, it leaves us with a lot of questions that scholars have answered in really different ways. here are some of the questions that have exercised them. did the economic and social changes during dickens's lifetime, during this middle of the 19th century, add up to a worsening of material conditions? almost no scholar would argue things were good for ordinary workers in the early industrial revolution when dickens is writing; what they have disagreed about bitterly is whether things were actually getting worse, or we simply have better records to prove that they were at least as bad as during the earlier eras. they've also answered this question: how do those social and economic changes fit with the political changes, such as the extension of the franchise? they've also puzzled over this: why is it that britain develops very strong unions in the end but doesn't have a strong orthodox marxist movement? we also want to pay some attention to another issue that has exercised scholars: how is it that issues of class and race inside britain are connected to issues of class and race outside
them? the irish, for example, were often considered, in the united states as well as in britain, to be black, because to be white was to be associated with civilization; it was to be associated with those markers of, in many cases, middle-class values. britain's conquest of large new areas of the world helps to fuel the industrial revolution, which eventually benefits the population enormously even as it has these terrible side effects in the first few decades. scholars have also given radically different answers to the question of why the industrial revolution took place in britain first. was there british exceptionalism, something unique about the political culture? or was there instead a kind of happy accident for britain, a series of natural factors like shrinking forests that had been mismanaged, that made the british turn to coal; innovations in coal made them turn to the steam engine and so forth, and the british sort of blundered their way into the industrial revolution? and then of course there's the central problem: why is it that the british, who have such an advantage, such a lead in the industrial revolution, lose it and become eclipsed by both germany and the united states? so the changes we've talked about fit with a central period of productivity for dickens, from roughly the 1830s to about the 1860s, maybe 1865 or so. so what i want to ask you is: as you read through the novels, which dickens are you reading? here's one take; it's by karl marx. he argued — and here's what dickens, as well as a number of other authors, had done; read the part at the bottom — that they had painted the middle class as full of presumption, affectation, petty tyranny and ignorance, and the civilized world have confirmed their verdict with the damning epigram that it has fixed to this class, that they are servile to those above and tyrannical to those beneath them. george bernard shaw apparently thought that dickens's writings were more
revolutionary than marx's capital. other scholars are not so sure. patrick brantlinger, for example, says this: there can be no doubt that dickens was concerned with the factory question throughout his career, but the shape which his concern takes in the novels is unsatisfying to many readers, for it seems to justify ruskin's jibe about his membership in the steam-whistle party; it is simple to prove that dickens supported that party, for his speeches are loaded with praise of british industry. so i'd like to invite you into a conversation: which dickens are you reading, and what have you seen in your reading that connects to these social and economic changes, to the industrial revolution and its unfolding? if you want some other resources, i point you towards some of these, but they're also available here in the library. so, what kind of questions, comments, and outraged responses? [audience question] i think that's, for the most part, as i said, a story of good intentions and some pretty imperfect results. the question was: can you speak to the efforts on the part of the government to ameliorate the conditions, the living conditions, the working conditions, of some of those people in the factories and mines, as well as the social displacement of those individuals? and there were indeed lots of good intentions and efforts, and some of those we briefly went over. one was the reform of the poor laws. certainly it is true, and i believe most historians agree with this, that the poorhouses that were set up, however well-intentioned they may have been, really led to a pretty bleak existence for the poor, and that was part of the design: things were not supposed to be cushy. they were supposed to be enough to keep body and soul together, but not enough to make you want to stay very long. you were supposed to subscribe then to those values that we now call victorian: you're
supposed to use your own industry, your own thrift, your own gumption, to work your way out of the poorhouse. so there were some deficits. it's easy to look back and say, well, for example, some of the efforts by the government to prevent workers from unionizing or striking seem exploitative — and in the end they may be — but to try and recover something of the worldview of some of those who defended those positions in that era, you might want to look at liberal theories of contract labor. the position of those who said they opposed unionization for the benefit of the workers was, believe it or not, that unions reduce the freedom of individual workers to bargain for their own wages: everybody was to be paid the same or a similar wage, so it was seen by those people as a terrible affront to the individual's ability to use thrift and industry to rise above less productive workers. so those sets of policies indicate really carefully reflected and rational choices; whether we think those are well-intentioned, or whether we think that's a cover for advancing class interests, well, scholars have radically disagreed about that. dickens is part of that story. so, for example, in his lifetime he actively and successfully raises funds for a hospital to try and keep it from going bankrupt. he also, in writing in a way that dramatizes the pretty bad conditions of debtors' prisons and so forth, probably contributed to the closure of some of those institutions. there were great reformers in his lifetime like william wilberforce, who used his christian theology to advocate for the abolition of slavery. so, if you will, there's a laissez-faire approach to reform in this regard: many individuals voluntarily organized to try and advocate for change, and dickens is certainly a part of that process. in that way he is really a product of his own world. he is not, apparently, somebody who was terribly interested in socialism, even though marx apparently thought so. he was one of several
authors who had exposed the middle classes in remarkable ways. and there we get into some interesting questions that i'd like to get your responses on: when we think of the author's intent, dickens's intent, versus the effect of his writing on you, i wonder how you read him. is he, in the end, kind of affirming the social and political order of britain? is his a kind of steam-valve approach — in mocking and caricaturing certain aspects, he confirms it? or is he really undermining it? what other questions? it would appear that britain's use of fossil fuel, or coal, was 100 years ahead of the united states, and that britain was fueling, you know, naval ships with steam power. absolutely, and this has occasioned a lot of reflection in the histories: why is it that the british of all people, and europe of all continents, is at the forefront of this? for example, kenneth pomeranz has argued china might have emerged as a leading power in the industrial revolution; why does it happen this way? so one response to that is to emphasize a kind of story of nature: britain, being an island, couldn't keep expanding its territory very well in europe; it had a limited number of forests and did not manage them very well, so it's running out of wood, and therefore the ability to make charcoal that will help it refine metals, so it has to find some other source, and that happens to be coal — and coal in britain happens to be close to the surface, and it happens to be sort of close to the iron deposits too. so it may simply be a happy accident of history for them that all these things are there. other scholars argue, well, there's something different about british culture: it's more entrepreneurial, the culture of the rule of law was more deeply established in britain, and that led to inventors being willing to invest in their research and their products and so forth. so there may be something of a human-directed response there to explain why the british emerged with initially better technologies, including
why they adopt coal and a really great |
english_literature_lectures | Introduction_to_literature_first_recorded_lecture_part_3.txt | okay, so what was the dawes act, or the allotment act? well, it was a piece of legislation passed in the late 19th century, in 1887, and it did a number of different things, at least theoretically, politically, to native communities. i say theoretically because, although the legislation was passed in 1887, it took shape in different parts of the united states and affected different tribes differently over the course of the next 30 to 40 years, so it kind of gradually took shape. but over on the right, these bullet points, these are the key features of the dawes act. it was a major shift in federal indian policy. politicians thought, okay, we're not going to confine native communities to reservation spaces anymore; we're gonna offer them american citizenship, and then they will be eligible for all the liberties that other citizens are eligible for — eligible, theoretically, for political protection, taking potential legal claims to court individually, and things like this. i'm sorry i can't see the little cursor here, but i'll just go through these bullet points on the right, point by point. so, as i said, the first bullet point: the act was an effort to incorporate american indians into the national body by offering them citizenship. the second point notes the policy moved to dissolve tribal allegiances and create american citizens, which is to say, in theory, if you were a lakota you would no longer identify as lakota; you would identify as an american citizen, a national citizen subject, and it tried to do that culturally and linguistically. as the third bullet point notes, the legislation aimed to dissolve tribal ownership of land and give, or allot — that's a synonym, basically — 160 acres of land to individual heads of households
and then whatever land remained after that would be sold off to whomever was interested in it, basically meaning white settlers. the fourth point — i'll come to that fourth point in a minute. and the fifth point, the educational arm of the policy: how can you convert all these native communities into american citizens? well, education would be the fundamental way to do that, and english education, teaching the english language, would be a really fundamental aspect of that education. so there was an educational arm of the policy, as this last bullet point notes, that moved to get american indian children to study at off-reservation boarding schools. that is to say, you might be raised in a certain community, but the idea of the government was to get native children away from their families so they could become acculturated to essentially white middle-class notions of existence, and they could assimilate the language, culture, and an entirely different value system. on one hand, i should say, i think some people, some northeastern politicians and northeasterners, genuinely thought the allotment act would be good for native people: you know, they won't be marginalized, they won't be forced to stay on reservations, they'll be offered american citizenship and the potential political representation that comes with it. at the same time, i think some people thought, you know, settlers are gonna take these native people's land unless they understand land like we do, unless they understand land as private property rather than communally owned property. you remember i previously said the whole concept of owning land was somewhat alien to, for example, the sioux, who are a nomadic people; they had territories that they claimed or fought for, but this notion of private property ownership was kind of an alien thing to them. but some people
thought, some northeasterners: if they understand private property like we understand private property, then it'll be harder to swindle them; people won't be able to just take their land because they don't know what it's worth or how it's valued. so they thought, rather than holding, for instance, a reservation space in common, a collective model of land ownership, we're gonna give individual people 160 acres, and then they can have little farming families, kind of like this jeffersonian dream i mentioned earlier of agrarian democracy. so on one hand, i think some politicians thought it would be a good thing; on the other hand, there were genuinely greedy people that wanted to just open up land that they thought was surplus land, that they thought native people didn't use appropriately and should be sold, and i think that was a major aspect of it. so this is the fourth bullet point i note down here: arguments that the allotment act would help protect the land by encouraging native people to understand private property ultimately seemed really shallow, as native communities lost two-thirds of the land base they had prior to the allotment act during the time the policy was enacted — and that's 90 million acres. so it's hard to think that the argument that they would understand property differently and retain it holds much water, based on that point. so if you look at the images on the left, the top image is of a native community, i think in nebraska, on the reservation, and they're lining up to get their individual land allotments given out to them, so the policy is going into effect there. the bottom is an advertisement from the period. indian land for sale, it says: fine lands in the west — irrigated, irrigable, grazing, agricultural, dry farming. and so this obviously is saying there's going to be land that will no longer be part
of native communities; theoretically they'll be paid for it, but you can imagine the compensation wasn't really appropriate. okay, but to return to this idea of the educational arm of the allotment act, the policy: these were the off-reservation boarding schools, and so in the late 19th century there's a first generation of american indian children, native children, who were sent to these schools. the government would send missionaries out — if you've read zitkala-ša, she talks about these missionaries coming to her tribe and talking about, quote unquote, the land of the big red apples — and these people would go recruiting children for the schools. i think some native leaders, some american indian elders, did want to send their children to school because they thought the world was changing, and the schools might help them negotiate this change. i don't think they anticipated at all the degree to which these would be really colonial educational institutions, with the idea that the students wouldn't come home — theoretically; many of them did, but theoretically the schools didn't intend for that to happen. so, the colonial education of the boarding school project that this was a part of: one of the first schools was the carlisle indian school in pennsylvania, and the founder of this school was not an educator; it was a military general by the name of richard henry pratt, and in fact the school was founded on a military base and was kind of run in a martial or military manner — the iron routine that zitkala-ša talks about, where a bell rings and people sit down, and a bell rings and people stand up, and things like that; it was part of this kind of military system. this first quote, the bullet point on the right, was from richard henry pratt, and it was his motto. this is a pretty horrifying educational model, obviously: kill the indian and save the man. and what he meant by that was
essentially eliminate native culture and tradition but save the individual. and you know, that's a pretty atrocious thing to think about on one hand, but on the other hand it's a strange thing, because it's obviously culturally prejudiced and it seems racist, but at the same time pratt is acknowledging the potential of native people. so, you know, it is a horrifying statement, but at the same time he acknowledges the potential of native communities. and interestingly, the first time he became interested in native communities was when he was on the frontier. he was a cavalry soldier on the frontier, and he had african-american cavalrymen with him. i don't know if you've ever heard the song buffalo soldier by bob marley, but that song is about african-american cavalrymen: american indians called the african-american cavalrymen buffalo soldiers. anyway, pratt was on the frontier and he saw his buffalo soldiers, african-american soldiers, collaborating or working with native scouts, and he thought, these native scouts are so talented and so skillful out here. and he was thinking about african-american emancipation and african-american education, and he thought, why can't native people have education too? so, you know, this kill the indian and save the man is a very troubling quote, obviously, but pratt thought he was doing something good. still, it was a pretty harsh colonial system, and as future english language educators, all of you should understand and think about — i mean, i know you think about this now, you know, you think about language politics — but you should be aware of this history of english education in a really troubling historical moment. so how did it work? the second point here: children were forbidden to speak their native tongues; they were given new
english names when they got to the schools, their hair was clipped, and they were forbidden to practice any traditional cultural or religious practices; they were also indoctrinated into the christian faith, or there was an attempt to indoctrinate them. the image on the left is kind of a before-and-after shot. richard pratt would sell these photographs to try to raise money for the schools, and the top image is the same group as the bottom image; the bottom image is four months after these children came to the schools. and the image is supposed to tell this story: you know, they've gone from this supposedly, quote unquote, savage state, with long hair and kind of motley clothing, to this bottom image where you see what looks to be like an image of a white family from the period — although you'll notice they're wearing military uniforms, which speaks to the carlisle indian school having this military history. okay, but i want to insist that even though this was a really troubling educational system, and the english language learning was conducted in a really disturbing manner, at the same time we should not think of these native communities as victims. i think we should think of them as people negotiating a really difficult historic moment, trying to negotiate it to the best of their abilities, and moreover negotiating it in a way to turn these new tools, language tools or cultural tools, to the advantage of their own people. and one theorist, a native scholar, who helped me think about this and understand this period better is someone named gerald vizenor, and in this slide i include one of my favorite quotes from vizenor. i titled this slide: the english language, colonial oppressor on one hand and tribal liberator on the other. it was both of these things. and so
vizenor says — before i read the quote, i should note that in the late 19th century there was a religious revival across the americas called the ghost dance. it was a new indigenous religion that spread from the west coast to the east coast, and the dancers, these ghost dancers, these religious practitioners, thought that if they got together and danced in a certain way, white settlers would leave — there'd be like a massive earthquake or something, and they'd leave the americas — and the buffalo would come back. and so it was like this revivalist spiritual vision, and the way it spread, interestingly enough, across the country was through technologies like the train and the english language. so the sioux, the lakota, sent young adults who had gone to these schools, and they got them on a train and shipped them from the dakotas out west, to the current state of nevada, because they'd heard about this spiritual leader, and they used english to communicate over there. and so gerald vizenor, this american indian theorist, thinks this is really interesting: this new native religion and revival was spread by using colonial technologies, and so a potentially different future for native people could be facilitated using colonial technologies like the train and the english language. and so this quote, which i'll read now, speaks to this reality. vizenor writes: the english language has been the tongue of colonial discoveries, racial cruelties, invented names, the false representation of tribal cultures, and the unheard literature of dominance in tribal communities; at the same time, this mother tongue of colonialism has been a language of invincible imagination and liberation for many tribal people in the contemporary world. english, a language of paradoxes, learned under duress by tribal people at mission and federal
schools, was one of the languages that carried the vision and shadows of the ghost dance, the religion of renewal, from tribe to tribe on the vast plains at the end of the 19th century. english, that coercive language of federal boarding schools, has carried some of the best stories of endurance, the shadows of tribal survival and resistance, and now that same language of dominance bears the creative literature of distinguished native authors in the cities, whose literature could be the new ghost dance literature, the shadow literature of liberation that enlivens tribal survivance. which is to say, english can help sustain tribal futures, native futures. so it's a paradoxical thing. okay, i'm gonna try to end quickly; i'm going over my time, as usual. but you read the introduction to zitkala-ša, who's part of this boarding school generation. in the autobiographical stories we read, published in the atlantic monthly in 1900, she described her youth outside of the boarding schools, being raised in a more traditional way by her mother, as well as her boarding school experience, and she described this to a kind of white northeastern audience reading the atlantic monthly, reading the journal. and you can see these images on the left: these are two images of zitkala-ša — although note her birth name was gertrude simmons bonnin, which is essentially an english name, and she gave herself this indigenous name, zitkala-ša, which means red bird, i think, later in life. and she was very aware — this is a point i want to make — she's very aware of how readers want to think about native people in the period, and so these images kind of show her dressed in traditional clothing. but at this point in the 19th century many native people didn't wear traditional clothing; they wore a kind of hybrid fashion, because they'd been in contact with settlers for years. for instance, look really quickly at the image on the upper left
here. see this young native man in the front row on the right? he's wearing, like, you know, riding boots and jeans and a jacket — that's how he came to the school. so there were indigenous aspects of their clothing, but it wasn't like they dressed in traditional clothing like you see here. but note she represents herself as kind of totally traditional; strategically, she does that in her writing. and, for instance, we don't hear about her father in her autobiography, but her father was a white military man; we only hear about her lakota side. and part of the reason she does that is because the notion of a quote unquote mixed-breed, a mixed-race indian and white person — they were kind of considered not authentic; there was some racism that targeted quote unquote mixed-breeds, and so she cast herself as this pure lakota person. why would she do that? well, she knew that her northeastern readers would be very interested in this idea of a kind of utopic native space or person; they were very interested in stories about them. this goes all the way back to our discussion of the tempest and how a meditation on native communities served as a critique of european societies, and people would wax poetic about native civilizations, philosophically speaking. okay, so this is where we're gonna end. i put these questions on facebook; these are the questions for you to answer. i'm not going to talk about zitkala-ša's literature; i introduced you to the history and gave you some context, and now i'm interested to see your comments. so i say: although zitkala-ša is writing her autobiography, she takes dramatic license to make her readers sympathize with the plight of american indians. consider — you can either answer the first or second question — do you see any similarities between her writing strategies and criticism and those of harriet beecher stowe? consider, for instance, our discussion of the
way sentimental fiction worked emotionally on a reader. or, alternatively, how does zitkala-ša describe her youthful education with her mother compared to her boarding school education? how might a northeastern reader who has read thoreau and emerson react to her description of a natural education compared to the boarding school education? okay, which is to say: remember, with thoreau and emerson, nature had been elevated to this new level; a person's relationship with nature could be seen as a resolution, or a way to get past the problems of the industrial west, and native people were frequently associated with nature in this vision. so think about that. okay, this lecture went over, i think. we're going to try to skype, if only for a minute. if you have any questions, just email me. but thank you for your focus, and again, thanks so much for being open to this flexible strategy; i know it might be hard listening to me and staring at a powerpoint. thanks everybody |
english_literature_lectures | Dickens_and_Education_Dombey_and_Son.txt | G, who's a professorial fellow at St Anne's College, is very well equipped in her knowledge of Victorian literature in general but has a special interest in the child, the mind of the child, and her latest book, published in 2010 and called The Mind of the Child, should be available in Blackwell's if you feel inspired to look at that. So without further ado, we're going to be talking about education and Dombey and Son, and over to you. Welcome. Yes, okay, so I'd like to start by putting Dickens in the context of our own time. You've probably been aware of the newspaper headlines — not quite so recently, but at least two or three years ago — a lot of headlines talking about the fact that British schoolchildren were the most over-examined in the world, endless exams at every age, and with this came a backlash and attempts to reduce the number of exams that schoolchildren took. In addition to that, there were also what we might call epidemics of anxiety in our classrooms, and there were also surveys that talked about the fact that our children appear to be the most unhappy in the developed world — a real outcry: why is this, what are we doing? And linked to that, even more gruesome, the rise in child suicide. Now what interested me was the way in which all these discussions were taking place as if they hadn't taken place before. So what I want to do is to put that sort of concern in relationship to Dickens in the nineteenth century. So I'm focusing on Dombey and Son, published in 1848, the year of revolution across Europe, the hungry forties — a really tumultuous time. The novel's full title, emblazoned on the front pages, is actually Dealings with the Firm of Dombey and Son, Wholesale, Retail and for Exportation, so clearly you can see that what Dickens is doing is putting right up front the notion of
Mercantile capitalism there's also a novel about the dealings of a domineering father with dby and Son embedded there in the middle of the title so his domineering father with his children but this strand is interwoven with an analysis of a society where as Dickens put it it was natural to be unnatural so d himself is a merchant an absolute product of Imperial capitalism sending his GRS all over the world and judging things only in terms of money Dickens also programs in this novel The Coming of the railways uh it was a time when England was being torn up um Railways were being placed right across the country um transforming the way people thought about their lives and the ways in which they could move around and think about time and space but this was also the heart of the industrial year and with that came all the slums on the underclass created with that so what di shows in the novel is the ways in which the the um charging cre of the trains actually revealed often to the eye what had been concealed to many of the middle class before the apping conditions of the slums that were the outskirt in the center of the Cities so what happened with the coming of the railways is you've got a complete change your sense of time and space because if you imagine before the r r people it would take many days to travel from uh town to town um you didn't have any standardized time so it was known as the coming of Railway time as all towns had to now synchronize their clocks and with it came there with the factories as well a sense of time being organized not by nature and natural rhythm but by the determination of the engine whether it's in the factory or on the the railway line so changing sense then of of time and also of space the fact that everybody can now get to London should they wish and indeed in the great exhibition of 1851 people travel often for the first time the length and breadth of England the sense that now you could move in ways that have not been thought 
possible. But with that also came a questioning of what it is to be human, now that so many parameters had changed. Now Dombey and Son is a crusading novel: it wants to transform things; but it is also very funny, and for those of you who were brought up on the BBC Sunday serials, you'll probably remember that they were all incredibly dark. Certainly I was completely put off Dickens when I was young, because all it seemed to be was houses going up in flames, dark, dark interiors, and no sense of the comedy. So I thought I'd start (oops, I have to be careful here) by reading you the opening of the novel, just to give you a sense of how Dickens sets the scene: 'Dombey sat in the corner of the darkened room in the great arm-chair by the bedside, and Son lay tucked up warm in a little basket bedstead, carefully disposed on a low settee immediately in front of the fire and close to it, as if his constitution were analogous to that of a muffin, and it was essential to toast him brown while he was very new. Dombey was about eight-and-forty years of age. Son about eight-and-forty minutes. Dombey was rather bald, rather red, and though a handsome well-made man, too stern and pompous in appearance to be prepossessing. Son was very bald, and very red, and though (of course) an undeniably fine infant, somewhat crushed and spotty in his general effect, as yet.' So you've got this wonderful sense of the analogy between these two red and bald figures, Dombey forty-eight years and Son forty-eight minutes, and that preoccupation with time which runs all the way through. Let me read another section now, on the way in which Dombey conceived of his firm: 'Dombey and Son. Those three words conveyed the one idea of Mr Dombey's life. The earth was made for Dombey and Son to trade in, and the sun and moon were made to give them light. Rivers and seas were formed to float their ships; rainbows gave them promise of fair weather; winds blew for or against their enterprises; stars and planets circled in their orbits, to preserve inviolate a system of which they were the centre. Common abbreviations took new meanings in his eyes, and had sole reference to them: A.D. had no concern with anno Domini, but stood for anno Dombei, and Son.' So what you've got here is a complete rewriting, by Dombey the merchant, of the whole religious eschatology, so that time itself, anno Domini, relates only to himself and his son: a reworking, obviously, of God the Father and God the Son. Now, as I've suggested, this links with a complete obsession in the novel with the sense of the control of time, trying to depict the ways in which the Victorians became preoccupied by this control. Within the novel there are two children: Paul, whom we have just seen being born, and also his older sister Florence, who is six. As a girl, she has been regarded as worthless by her father; she is to him, he says, merely a piece of base coin that couldn't be invested, adding nothing to the capital of the House's name and dignity. It is a wonderful representation of the way in which Dombey is so focused on his son that he almost cannot see his daughter, and when he does see her, he is disturbed by her. Both Paul and Florence are children deprived of childhood, forced into premature development: Paul by excessive paternal expectations, and Florence, conversely, by paternal neglect. Paul's sufferings are not primarily physical, as with many children in the Victorian age, but mental, as the weight of Dombey's expectations distorts his psychological growth. Dombey just cannot wait for Paul to grow up; his dominant feeling, which intensifies with time, is sheer impatience. The image he holds of his son is not so much that of an infant or a boy but of a grown man, 'the Son' of the firm, and it is with this imaginary adult that Dombey spends his time in constant communication in his thoughts. Though believing himself to love his son, Dombey cannot reconcile the alter ego who figures in his self-communings with the child actually placed before him. He wishes, Dickens says, to buy off Paul from childhood; and, as the villain of the piece, Carker, later observes, Dombey and Son
'know neither time, nor place, nor season, but bear them all down'. So the imperialist and capitalist enterprise which is the firm annihilates all natural distinctions of space and time; translated into domestic policy, this attitude produces a yearning for a child who is not a child, whose unproductive season of youth can be skipped or accelerated. Now in one way Dombey gains his desire: Paul is what Dickens describes as an 'old-fashioned' child, with his strange, old-fashioned, thoughtful ways and his precocious moods. He is like one of those terrible little beings in the fairy tales who, at a hundred and fifty or two hundred years of age, fantastically represent the children for whom they have been substituted. So we're going back to the fairy-tale changeling, the being substituted for your own child. In keeping with the malign literalism of so many fairy-tale wishes, Dombey has indeed been granted his wish: a child who is also an adult. The fairy-tale changeling prefigures, in mockingly grotesque form, the domestic fantasies of this capitalist patriarch. Paul's replication of his father, so comically depicted at his birth, is continued into his childhood: we have another scene where both of them are sitting by the fire, Paul with his old, old face, both with wandering thoughts, 'Mr Dombey stiff with starch and arrogance; the little image by inheritance, and in unconscious imitation. The two so very much alike, and yet so monstrously contrasted.' So there is a sense here of what is happening to little Paul, pressed into the mould of this unconscious imitation; and the horror of this monstrous likeness between the two is that Paul, overburdened by all this inheritance, unconscious imitation and expectation, actually appears older than his father. Dickens depicts a wonderful conversation between the two in which Paul asks his father about the value of money. It is one of those moments Dickens always has, where the child asks, in all innocence, a question that the adult then has to answer. It is rather like Wordsworth in his poetry, in the Lyrical Ballads, in 'We Are Seven' and others, where you get a child questioning the wisdom of the adult. But Paul is not an incarnation of the pure wisdom of the innocent child living in harmony with nature; rather, he is a distinctly social and unnatural product of his environment. In his creation of this old-fashioned child, Dickens drew together a variety of strands from contemporary culture. You've got echoes of the Wordsworthian child of 'We Are Seven', and of the child from the Evangelical tracts who was too good for this world and so destined for early death; you probably know the endless religious verse and stories about those wonderfully good children who, it was frequently rejoiced, were taken to God in their early years. He belongs to that sort of genre. But Dickens is also directly engaging with educational and psychological debates about child development which stretch back to Rousseau in the eighteenth century. At the time of writing Dombey, Dickens was deeply preoccupied with questions of education. He had recently toured around England looking at various educational establishments, in part so that he could decide where to place his own son, Charley; and en route, as he was travelling to Switzerland, he wrote to Lord Morpeth, just a week before he started Dombey, indicating his desire for a commissionership or inspectorship 'on any questions connected with the education of the people, the elevation of their character, the improvement of their dwellings, their greater protection against disease and vice, or with the treatment of criminals, or the administration of prison discipline'. He was also in correspondence with Lord John Russell, and indeed had proposed to the educationist James Kay-Shuttleworth that they should set up a ragged school themselves, one, Dickens says, where the boys would not be wearied to death and driven away by formal discourses. And whilst he was in Switzerland, which is
where he was writing Dombey and Son, he repeatedly praised the Swiss schools, and he visited an institution for the blind where he was fascinated, as he had been when he visited a similar institution in Boston, by the progress made in teaching a blind, deaf and dumb girl to speak. He was quite fascinated by what he saw as the potential locked away in children, and by the techniques being developed at that time for enabling those who had previously been shut away in silence and ignored to be brought to their full attention. In Dombey and Son Dickens creates three different educational establishments: Mrs Pipchin's, where Paul is sent first; then Dr Blimber's; and then the Charitable Grinders, attended by another figure, Robin Toodle, whom I won't look at too closely now. While Mrs Pipchin's 'infantine boarding-house' is not strictly a school, Paul is sent there for bodily and mental training. Mrs Pipchin herself is held to be quite scientific in her knowledge of the childish character, and is known as a woman of system with children. Now this is the first time in Dickens's writing that he actually bases any events in the novels on his own life: Mrs Pipchin is based upon Mrs Roylance, with whom he lodged whilst he was working in the blacking factory. You probably know one of the most important facts about Dickens's life: when he was young his father withdrew him from school and placed him in the blacking factory, and so there was not only the horror of working in the factory but also the sense of social disgrace he felt at being removed from his middle-class environment. Throughout his life he keeps reworking these experiences, but this is the first time he does it. So Mrs Pipchin, who is described as an ogress and a child-queller, carries the weight of Dickens's own emotions, linking the sufferings of her fairly well-off inmates with those of a terrified child cast down from middle-class to working-class status. Mrs Pipchin's self-styled scientific system of child management links her with the world of mechanisation. Dickens says it was part of her system 'not to encourage a child's mind to develop and expand itself like a young flower, but to open it by force like an oyster': a wonderful image there of the knife coming up and splitting the child open. The idea of following nature in child development goes back to Rousseau, although this specific image of the mind as a flower was even more closely identified with Friedrich Froebel, the founder of the kindergarten movement; and although the first kindergarten was not to open in England until 1851 (Dickens was very supportive of it), he possibly came across Froebelian ideas whilst in Switzerland, because various such schools had opened there in the 1830s. Now this violence of the image of the oyster, forced open by a metallic instrument in order to be devoured, is a fitting introduction to an establishment which is described as having a sterile garden, with snails adorning the doors like cupping-glasses, as if the inmates are to be drained of their lives. Mrs Pipchin's system is, Dickens says, to oppose nature at every point: children are frequently sent to bed at ten o'clock in the morning, and they are given everything they didn't like and nothing that they did. Dr Blimber's hothouse is anticipated in Mrs Pipchin's very menacing collection of plants: she has these unseasonal cacti, described as writhing like hairy serpents, or hanging from the ceiling as if they had boiled over. Now, in keeping with her systematic thwarting of all natural childhood impulses, Mrs Pipchin strongly approves of Mr Dombey's decision to place the six-year-old Paul at Dr Blimber's, where he can commence his studies in Greek. There is, she observes, a great deal of nonsense, and worse, talked about young people not being pressed too hard at first. Now this 'nonsense' dates back to Rousseau who, as you probably know, talked about natural education, and actually suggested that children should not be introduced to books until they were fifteen. He is at the extreme end, one might think, of the sense that children have to
develop naturally; from that you get the sense that you should not give children books too soon. But although these ideas were taken up, they didn't seem to do much to change the ways in which the elite classes were being educated. One of the most famous cases in this line is John Stuart Mill, who writes in his autobiography about how he started learning Greek at the age of three. That was somewhat extraordinary, but nonetheless parents were trying to get children from the earliest age, almost before they could properly read, to learn Latin and Greek: the sense that they didn't have time, and that you had to ensure they galloped through their childhood. Now advice books and psychiatric literature from the beginning of the century had carried warnings about the effects of parental pressure on children's education. A book such as William Buchan's Advice to Mothers of 1809, which many homes would have had, offered two contrasting examples of the baleful effects of poor parenting: a young man who had been so cosseted and protected that at eighteen he looked aged eighty, and died at twenty-one; and the case of Isabella Wilson, whose fond mother had proudly nurtured her intellectual development, which at fourteen surpassed all others, only for her then to fall into fits and revert to childhood. So there was a real sense of what parents were doing in forcing their children, and some wonderful, comic descriptions of parents rejoicing when their child died of over-education, putting up their memorial tablets in the churchyard. Psychiatric texts noted that the overstrained and premature exercise of the intellectual powers could lead to insanity, while the Cyclopaedia of Practical Medicine speaks of errors in education consigning the sufferer to an early grave. But interestingly, these various references are not developed; they are more a register, I think, of popular social belief than a key platform of argument. And interestingly, it is actually in Dombey and Son that you find the first really developed study of over-pressure, and this was to pass quickly into the medical literature as a founding case study, which I think is quite a strange notion for us now, that a case study would be drawn from literature; but this was indeed the case. After Dickens published Dombey and Son you find the case of little Paul Dombey being taken up in a whole range of medical texts; I found it being quoted as the founding study right through until the 1920s. So Paul is to be placed at Dr Blimber's, both to further his education, so that he can take his rightful place in the firm (and the firmament), and also to wean him from his sister, to whom he was deeply emotionally attached. Blimber's, in other words, is to remove him from the feminine domain of warmth and emotion associated with Florence, and to catapult him into adulthood, despite Paul's forlorn refrain, 'I'd rather be a child.' Dr Blimber's establishment is described, famously, as 'a great hot-house, in which there was a forcing apparatus incessantly at work. All the boys blew before their time. Mental green-peas were produced at Christmas, and intellectual asparagus all the year round. Nature was of no consequence at all.' Now Dickens points out the horticultural and market disadvantages of such a mode of production, for there was not the right taste about the premature productions, and they didn't keep well. The image of education as the forced production of fruit is to be found in Rousseau's Emile: nature would have children be children before they are men; if we try to invert this order we shall produce a forced fruit, immature and flavourless, fruit which will be rotten before it is ripe; we shall have young doctors and old children. The foundations of that old-fashioned child Paul, and of Dr Blimber's hothouse of education, clearly echo Rousseau, but Dickens has taken the elements and made them his own, turning them into a commentary on the mid-Victorian age. So this hothouse is not really an aristocratic glasshouse designed to enhance the natural power of the sun and produce fruit for the rich man's
table: it is dominated by a great forcing apparatus, turning it into a product of the great machine age. The boys are not just flowers which blow before their time, but shrill engines forced to perform incessantly, without attention to time or season. The text anticipates Hard Times in the parallels it draws between the grindery of utilitarian education and the ceaseless workings of the industrial machines, what Dickens, describing machinery, called the melancholy-mad elephants, which dominate Coketown life. And this transformation of organic flowering into mechanised blowing receives its apotheosis in the figure of Mr Toots, very comically named, whose very name instantly connects him with the railway whistle. He possesses the gruffest of voices and the shrillest of minds; having gone through everything, he 'suddenly left off blowing one day, and remained in the establishment a mere stalk', and 'people did say that the Doctor had rather overdone it with young Toots, and that when he began to have whiskers he left off having brains'. So you've got this wonderful sense that Toots comes in a normal infant, develops all this wisdom, and then loses his brains; in fact he remains a very amiable idiot throughout the novel. So in Dickens's hands, Dr Blimber's hothouse becomes the condensed expression of the overthrow of the natural laws of space, time and development that he recognised in an imperial age. Interestingly, it is not Dr Blimber, the headmaster himself, who is seen as a particularly sadistic figure; he is not sadistic like Squeers in Nicholas Nickleby or Creakle of Salem House in David Copperfield, and corporal punishment does not seem to figure in the school. Rather, Dr Blimber imposes on school life the relentless attitude to time found in the wider culture; as his daughter remarks to Paul, 'don't lose time, Dombey, for you have none to spare'. Now this seemingly innocent comment holds for the reader a darker meaning, offering an unwitting prophecy of Paul's death. Dr Blimber is incapable of registering the fact that his young gentlemen are children: he regards them as if they were all doctors, and were born grown up; and his forcing system, with its constant pressure at work, provides a frantic dash through the rewritten Ages of Man, so that the pupils have all the cares of the world on their heads after they have been in the school for three months, and wish to be buried in the earth by six. Now Dickens places the blame for this wanton destruction of childhood not on Blimber but on the parents, urging him on in their blind vanity and ill-considered haste. Thus Mr Dombey, on learning that Paul was naturally clever, was more bent than ever on his being forced and crammed; while Briggs's father, on hearing that his son was not gifted, was still inexorable in the same purpose. In short, however high and false the temperature at which the Doctor kept his hothouse, the owners of the plants, the parents, were always ready to lend a helping hand at the bellows, and to stir the fire. So Dombey, in his pride of ownership in his son, is replicated by the other parents, who, in mimicry of global trading, force their plants into a tropical zone in order to intensify their productivity. The baneful effect of colonial life has already been felt by Master Bitherstone, whose temper had been made vengeful by the solar heat of India acting on his blood; but the full negative effects of colonial overheating are to be found in the hideously overripe specimen of the novel, the Major. He is an adult who is an absolute parasite, attaching himself to Dombey and forever misguiding his judgement, and he talks about his own experience of education. He boasts of his own school of experience at Sandhurst, 'when you fellows were roasted' (I wonder whether that leads back to the muffin being roasted at the beginning) 'and hung out of windows by your boots'; and, the Major claims, it was the making of us: we were iron, and it forged us. So the British educational manufacture of such iron creates the sadism of colonial rule with which the Major is identified. And whilst the pupils of Dr Blimber's tend to
wither or die, the Major becomes a figure of excess. Although he claims to have been forced into such form by the high hothouse heat of the West Indies, where he was named a flower, it seems he is identified by Dickens rather with the excess or rottenness of overripe fruit: he is always described in terms of black and blue, and is reduced at one point to nothing but a heaving mass of indigo in his native servant's arms. Now the Major forms an explicit link between the hothouse of Dr Blimber and that of colonial rule. In place of the civilising mission, the elevation of native peoples by example, which features so dominantly in the imperial ideology of the time, we are offered the image of a thoroughly degraded, corrupt being, whose brutalising of others is reflected in his own transformation into a formless mass, an embodiment of that key colonial product, indigo, which was itself identified with the colour of the natives. Now although Dickens begins as an enlightened critic of racism, there are all sorts of problems in his politics, and you cannot say that he was out there campaigning against imperial policies all the time; but he does show, through the degraded figure of the Major with his imperial complexion, how unfitted such overripe beings were for colonial rule. The Major's sustained torment of his servant, who is simply called 'the Native', who has no particular name but answers to any vituperative epithet, is a disturbing image of the consequences enacted when the selfish egotism nurtured by British middle-class culture is exported overseas. The Major's relation with the Native, who is frequently cursed and beaten, offers the most negative image of education in the novel. Now just as the Native is bullied by the Major, so the cowed, belaboured inmates of Dr Blimber's are harassed by their parents. Far from creating exotic flowers, Dr Blimber's education seems to reduce his pupils to the condition of primal slime: they did not break up for the vacation, they simply oozed away. Now although, whilst they have been at school, the forcing system has been ruining their lives, dominating even their unconscious minds (so we learn that Tozer taught Greek and Latin in his sleep, and Briggs was worn down by his lessons), these children nonetheless preferred staying at school to going home, and this was because of the way in which they were treated by their parents when they returned home, always being pressed for more and more examination and study. So Tozer's home too was subject to examination at all times, whilst so severe were the mental trials of Briggs that his friends always expected to see his hat floating in the ornamental pond in Kensington Gardens, and an unfinished exercise lying on the bank. So the projection is undoubtedly comic, but only because of its seemingly incongruous nature, this wonderful image of a hat in the pond; yet it anticipates both the first discussions of child suicide in the 1850s, and the major debates on educational over-pressure which were to follow in the 1880s, when the notion of child suicide through over-education became a major preoccupation. As I suggested, the ideas in Dombey and Son were taken up almost immediately. One doctor who took up the ideas was Robert Brudenell Carter, who produced an absolutely wonderful essay entitled 'On the Artificial Production of Stupidity in Schools', which I think should be reprinted every year; in fact it was frequently reprinted in the nineteenth century and became a sort of bible for those wanting to change the educational system. So Carter supplements the Dickensian account of the forcing at Dr Blimber's with tales of boys of nine or ten forced to work at their books until midnight, and of young men and women crippled alike in mind and body by the effects of excessive and premature study. Little Paul is pressed out of life: he dies while listening to what the waves are saying. So the gentle rhythms of nature, of the waves, are set against the remorseless time of the steam train and the new industrial age. After the death of Paul, Mr Dombey in his
grief takes a journey by train across England; just a brief snippet of this wonderful depiction of the train charging across the country: 'Louder and louder yet, it shrieks and cries as it comes tearing on resistless to the goal: and now its way, still like the way of Death, is strewn with ashes thickly. Everything around is blackened. There are dark pools of water, muddy lanes, and miserable habitations far below. There are jagged walls and falling houses close at hand, and through the battered roofs and broken windows, wretched rooms are seen, where want and fever hide themselves in many wretched shapes, while smoke and crowded gables, and distorted chimneys, and deformity of brick and mortar penning up deformity of mind and body, choke the murky distance.' There is a sense that in this train journey, what you see is what is happening to England with its economy; and this deformity of brick and mortar, the horrible slums that have been thrown up, is actually both hiding and creating a deformity of mind and body. So we are shown a deformity: it belongs to the victims of industrial culture, but it is also Dombey himself, in his pride, locked in his ideas, unable to escape, unable to cope with the grief of having lost his son. There is education throughout this novel: Dickens, as I said, was crusading, trying to educate his readers, but you also find the education of all the protagonists, or at least of the ones that come out on the right side. Dickens has often been associated with caricature, but reading the novel you'll find there are extraordinarily supple psychological portrayals in relation to Dombey and his despised daughter Florence. You see the way in which the hatred of his daughter develops, and the jealousy of her: first because Paul had preferred his sister to him, and then later when he thinks that she sides with his new wife against him. There are wonderfully subtle ways in which you can see the poisonous familial relations developing, with the introduction of his new wife Edith, a woman who also never had a childhood: she was forced to grow up prematurely; but for her, the form of that prematurity was that she was hawked around all the watering-places in England by her mother, who was determined to enforce a good marriage for her; in fact she is virtually sold to Dombey. She despises and hates her position, as you can imagine, and here we enter a wonderfully melodramatic black plot: we get Edith fleeing from Dombey with the villain of the piece, Carker, who is torn to pieces by a train; and Dombey and Son, of course, as a firm, falls. But this being Dickens, the work of real moral education takes place: Dombey learns to value his despised daughter Florence. We have sentimentality here, at times almost unbearable, but very heart-rending nonetheless; and within it there is nonetheless, I think, real subtlety of analysis. One problem, I think, we find on reading the novel is the representation of Florence. Dickens is able to describe wonderfully the way she constantly sustains all these slights and hurts, but he is famously unable to come to terms with female sexuality, so you'll find that Florence becomes a woman but she is still also a child; Dickens cannot make her into a fully-fledged sexual being. And Edith Dombey, who certainly is a sexual being, is punished in that she has to be removed from the scene: you cannot have her present at the end. Florence, though, is always both a child and a woman, an innocent figure who despite all her trials maintains her innocence. She finds friends, and a future husband, in an alternative world of childlike adults who preserve the innocence and values of a previous age: not the harsh, mechanistic train-time of the industrial age, but the rhythms of the sea, of sailors, and of Sol Gills, who is a nautical instrument maker. So throughout Dickens's novels there is a sense of anger at how society was treating its young, at the brutalities, both mental and physical, that were visited upon these poor innocent children. In the case of little Paul, as I suggested, he passed directly into the medical textbooks and became the focus of a campaign
for over seventy years to change the nature of education. Writing in the 1860s, a leading medical figure described Dickens as the most influential figure of the age; and although we must allow for some exaggeration here, the statement does capture the extraordinary impact Dickens had on Victorian society and culture, with a legacy, as we can see, which continues into our own age. Growing young again at the end, a second childhood for everybody? Yes, yes: it's how you get an innocence that isn't simply linked with childhood, and all the characters that Florence becomes connected to have somehow kept hold of childhood; so there is a romanticisation there, I think. Oh, it's very positive, yes; but also, to think that Florence had gone through all those experiences and remained completely unmarked is, I think, unlikely. Also, when Dickens was dealing with her future husband Walter, he changed his mind: he was going to have him fall from grace, and then decided it wouldn't quite fit the moral plot of the novel, so he changed it so that Walter remained a clean character. Could you tell us something about how Dickens himself brought up his own children, and how he dealt with their education? This is not a happy tale. He had quite a few children, and he was initially, as you have seen, touring around England wanting to place Charley; but then he had marital difficulties, as you probably know, and he seemed to lose interest in his children. He became somewhat distanced from them, and when they were teenagers he was very keen to send them off abroad; many of them went. It's all rather ironic. It is, yes; it's very startling when you start looking at what happened to the children of the great Victorian novelists, because it was the done thing to send your kids off to the colonies, and the number who died... Was he really that bad a parent? Well, one doesn't know, but he did not have that close engagement. Don't you think, though, that part of the anxiety for children to be educated was because life expectancy was so much shorter in those days? That must have had a bearing. I mean, take somebody like Raffles, who had achieved his great feat of acquiring Singapore and brought his collection of tropical artefacts back; he actually lived a good while longer, but people made their careers much more compactly. Yes, I think that's a very good point, because so many children died; there is a sense that they are precious, but you don't know how long you will have them for, so you can see how precious they were. But then you can also see it operating the other way, because once people began to have the idea that a child should have a childhood, which really is a very nineteenth-century notion, you get the idea of trying to protect the child and to keep it in childhood. By the time you get to the end of the nineteenth century, childhood has become this incredibly sentimentalised garden to which we would all love to go back. So yes, there is a tension between those pressures, and the worry about how long children will live, but also this emerging idea that children should have a childhood, which we have absolutely now, don't we: it's enshrined in international charters and so on. Could I ask: were you going to say that the hothouse treatment meant that Paul would not only fail to flourish, but would actually die of it? That is absolutely, I think, Dickens's message: Paul is not beaten or anything else, but just through the way in which things weigh down upon him, he simply gives up and dies. Thank you. What struck me in your talk was, first, the attitude to Florence, and of course in the end it is the daughter who carries the firm forward; and I just noted your point about the introduction of the kindergarten, which was under Swiss influence; that's one point. The other point is the continuing, and I don't know if this is a particularly English or British, attitude to the education of girls; certainly in the twentieth century it was not unusual for all the resources to go to educating boys rather than girls; and I just wonder
whether uh kindergarten education is more much more sexual equal all um and and Pa attended to say kindergarten school different lives yeah that's interesting yes so kindergartens were introduced the first one in England was 1851 um but the the education of the young was surprisingly undifferentiated quite often middle class household and it was not until the boy was sent away to school that you got the real differentiation um and many great men scientists Etc to talk about the wonders of the education they received at their mother's KNE um and that then became um s of an impetus to looking at Women's education all inst so they can be better wives and mothers and bring up the sons of England uh but interesting when you get to the 1880s this idea of over pressure has a very interesting twist because the doctors started to argue that higher um high school education and inde University education um would be very bad for girls because the argument was that there's a set amount of energy in the body and if the energy that um should go to the reproductive system was going to the brain then women would not be able to reproduce um and this was was argued in America it argued here it was used as an argument to keep women outr but um so very interesting because although your sympathies wish to be with these people campaigning for the rights of the child and against no pression education in fact it had that s of raw sort of disturbing Underground when it came to we do com engine D in on the inside who seems to be one of the most interesting characters yeah Mr to Yes um because he but Mr Tule um the connection with Don U is that Mr tub's wife has to leave her children and going and bring up and be a wetness to um Paul whose mother dies giv him birth and throughout the novel you get this balance between the the the unfamiliar um D household on the household of the tubs where they're all tumbling and no money but but a sense of real Warth which is interesting because diin 
manages this at the same time as Polly the mother actually not being there so it's just a bit of slight of hand but then you get um Mr tles who um who there's a wonderful scene where he meets dby at the railway station and says to him something could I'm sorry for your loss and D is glorious that this man had presumed to think there was anything that could connect the two of them uh and this and this then proed this this nightmare Railway drive but in the end everything comes right in that one of um Mr tubal's Sons is sent to the the charitable Grinders U as a seeming um aspect of philanthropy by dony but in fact this is another form of horrific it was Mal education with a poort boy made to wear this yellow uniform stoed in the streets so kicked by the other kids but at the end you actually have this boy being rescued because he turned into a bad lot by the um the nautical side re-educated and Mr Tor is then educated by by his children um and so becomes a perfect model for what should happen um so yes he's very interesting character though again I think there is sentimentality in this notion um I've written about girls schools and contributed to lists to the P School syndrome and I wondered if in in my day my 9 years incarceration we were never told dickings but we had to meet Jan Austin and I wonder if that was supposed to be you know the wives of Empire i i in some s of I want Dickens was seen as to why we women were not allowed to e Dickens because it did confront us whereas Austin celebrates it and it is is diing now I believe so anybody know it's a great expectations has often served but that's very interesting and and I can see why because Dickens is well he's often treated as as low is not highbrow in the way of other writers but that might be another way of getting round the classes sh because certainly he was campaigning and you cannot read a dick SN without coming out full of indignation yes so you might be protected from all that it didn't work thetion 
was it like big brother something absolutely it was my heart leaks when I'm told my teachers I possibly teach di in school now because in 19 Century even those who couldn't read would know about the latest dicks those wonderful tales of the way which the working class areas the local leader would sit and people would come and listen um and the the tales of of when the VST stalls were reaching New York on Steam clouds of rocks the sense that right across the culture um people were reading Bens and if they weren't reading and they were listening either in the groups all Di's lectur CU he went all around the country in the end it's beli it killed him just a sheer physical effort of constantly moving around and lecturing but um yes so I would say big brother was quite a good equivalent apart from its effects um we've been treated to the most fascinating yes breadth and depth of discussion on Dickens and his effect on social history yes thank you very much indeed sure |
english_literature_lectures | The_English_Industrial_Revolution_II.txt

okay, so today we're going to describe more of the Industrial Revolution, and in particular we're going to consider what the meaning of the Industrial Revolution is. I'm going to give you some details here, because the details matter for interpreting what the meaning of the Industrial Revolution actually is. The classic core of the Industrial Revolution is a set of very dramatic changes in five major industries. Textiles turns out to be the most important, and it's a case where we can actually track down the people who initiated this revolution and who effectively changed the world. Last time — I won't put it up again, because we'll need the space — I described John Kay and the flying shuttle, and then Wyatt and Paul and their attempt to mechanize spinning, which ended in bankruptcy and ruin.

The next major inventor of the Industrial Revolution period is Hargreaves, 1769, and the device is the spinning jenny. This was again just a simple wood-and-metal machine, but one that replicated the action of hand spinning. Spinning was a major occupation for women in the pre-industrial economy, because it took so long to spin a pound of thread; that's why we even get terms like "spinster," because that is how single women would support themselves in this early economy. So it was a major operation, and it could be done purely by hand, or with the spinning wheel, which was a medieval European innovation; but there was still this limit of one thread per spinner. What Hargreaves's machine did was to do that on a multiple scale: to take the action of hand spinning and, instead of one thread, have sixteen. That's already a huge amplification of productivity, and yet it was a hand machine, designed for use in people's cottages. Eventually they had machines with something like a hundred threads each; if you think about the amplification of productivity, it's just enormous, and that's why the device was so revolutionary in the spinning industry: right from the beginning it was going to dramatically reduce the cost of yarn in the textile industry. Again, though, it was a machine that was very difficult to make money from, because, just like the flying shuttle, it was used in people's homes, could be reproduced by any competent craftsman — a watchmaker, a blacksmith — and it would have been impossible to enforce the patent rights. What actually happened to Hargreaves is that it took him a while to develop the machine, as it always does; he was forced to flee from Lancashire by machine breakers in 1768; he then attempted to patent the machine in 1769, but his patent application was denied, because British patent law was very demanding and peculiar in this period. For one thing, you could not have sold the machine to anyone before you applied for the patent, so anyone ignorant of patent procedure who happened to have sold a machine had invalidated any potential patent application; you had to be quite sophisticated to use this system. In any case, even if he had got a patent, it's not clear he would have made any money. He died in poverty and obscurity in the workhouse in 1777, but he is actually one of the great creators of the modern world, and his machine immediately had a dramatic impact on the cost of producing cotton textiles in the Lancashire industry.

In the same year, another innovator, Arkwright, introduced a machine called the water frame. These dates are always somewhat arbitrary, because the machines don't just drop onto the earth on one particular day; they take a while to develop, so we date them roughly. But the interesting thing is that it's almost exactly the same year, and that's why, if some people want to date the Industrial
Revolution, the 1760s — or 1769 — would be one of the best dates for it. Arkwright was actually the first of these innovators to make a lot of money. What happened in his case was that he did successfully patent the machine; he was a much more sophisticated operator. Now, there's a lot of mystery about Arkwright's role in the Industrial Revolution, because his background was as a barber, wig maker and dealer in hair, and somehow this man came up with the technical ability. The water frame is another way of spinning cotton thread: a machine designed for use in factories, called the water frame because it was initially powered by water. It was in effect the perfection of the earlier Paul and Wyatt machine, and it uses rollers for spinning, so it's very different from the spinning jenny. It turns out that the jenny could make the fine-quality yarn you need for the weft in weaving cloth, while the water frame could make the strong yarn you need for the warp; so in the same year the production of both types of thread was dramatically changed. The water frame became the foundation for a factory industry in textiles — it was designed from the start for use in factories. There then followed patent litigation against Arkwright, promoted by cotton manufacturers and factory owners who didn't want to honor his patents; they got the mechanic he had worked with to claim that the machine was in fact his, and that Arkwright had stolen the device from him. We'll never know the exact truth of this. Arkwright's patents were invalidated by the courts in 1785, so he had somewhat limited patent protection. He developed other machines besides the water frame, covering the whole process of going from raw cotton to cotton yarn, and he died in 1792 with a large fortune, much of it made by his abilities as an entrepreneur and organizer of the new factory system. So the interesting thing is that he is the first of these men to really make money, but it's not clear how much of it came from the protection of property rights in England as opposed to the fact that he was a very good businessman and a very good organizer of these new factories: even after his patents lapsed in 1785, he still made substantial sums. He is a famous figure of this revolution.

Then, very soon after, Samuel Crompton in 1779 produced a device called the mule — called the mule because it is a cross between the two other machines — and that became the basis for the spinning industry in nineteenth-century Britain. It could produce, on a factory basis, very high-quality threads; initially it was still manually operated, but eventually it was mechanized completely, and it became a very important machine. What happened to Crompton? Poverty. Once he produced the machine and began producing this very fine thread, the locals in his town of Bolton, which became a center of the spinning industry in Britain, immediately saw that something was going on. There was a lot of curiosity about the machine, and he decided he would give it to the town, to the local industry, in return for a promise that the manufacturers would raise money to give him a reward — he would not try to patent the machine. They defaulted on their promise. He was eventually given about five hundred pounds by subscription of the manufacturers in the 1790s; that's about twenty times a carpenter's annual wage. The mule became the foundation of the spinning industry in Britain, and eventually, in 1811, Parliament gave Crompton a grant of five thousand pounds. As I say, these people were responsible for increasing the output of the British economy by something like thirty or forty percent over
the course of the Industrial Revolution period. So relative to the economic gains that came from their activity, the rewards they got were pretty small, and the interesting thing is that such reward as there was had to come through the political process rather than the patent system. If the Industrial Revolution really was triggered by institutions, what you see here is that the institutions in Britain were actually very poor at rewarding innovation: still in this period, most of these men fail to make anything from their innovations. They did get one thing, though: we know their names; they became famous. And that leads to another interesting speculation about the Industrial Revolution. For most medieval innovations we don't know who was responsible — we don't know who invented the spinning wheel, and there are lots of things, like spectacles in medieval Italy, where we just don't know who these people were. What's interesting about this society is that these people became famous; they were in some ways minor rock stars of their time, so that even though they couldn't get money, they got fame. One question, then, is whether the Industrial Revolution was just the result of a particular type of culture in Britain in this period — a culture which delighted in innovation — and whether it was a cultural accident. The ancient Greeks thought this sort of thing trivial: why would anyone be interested in it? But the British, and particularly the people of northern Britain in this period, were fascinated by the possibilities of these machines, and people would have pursued them even without any reward — done it for fun, like the software community now that writes programs like Linux; they just want to do it. So it's an interesting issue.

What else happens in the industry? The next innovator is the Reverend Edmund Cartwright, a vicar — a priest of the Church of England — in one of these textile towns in the north of England, and his innovation was the power loom, which came in in 1785. What prompted it was this: he set out to invent a power loom without ever having seen anyone weaving. He had no mechanical background — he was trained in mathematics and classics — but he was vicar in one of these textile towns, and the revolution in spinning had dramatically reduced the cost of thread and so created a bottleneck in the weaving industry. This was the period when weavers became enormously wealthy: cloth was much cheaper now and there was a lot of demand for it, but it takes a while to train good weavers, and their earnings rose accordingly. His parishioners were complaining about this bottleneck, and so Cartwright said, well, someone should invent a power loom — why not me? And he devised pretty much the principles of the power loom and patented the machine. But, in what is a classic pattern of the industry in this period, the original power loom wasn't very good: it had the ideas, but it needed a lot of development, and so during the life of the patent the machine was a commercial failure. A factory set up in Manchester was destroyed by machine breakers, and he made no money from the patent. He did, however, get a grant from Parliament, and his grant was ten thousand pounds, so he actually did quite well. It's interesting that Crompton got a grant of only five thousand pounds; Cartwright got more because he was socially more upper-class and had better political connections, and he got his grant before Crompton, who got his in 1811. So what's interesting is that innovation here is still heavily dependent on government favor: you can go and appeal to Parliament and say, look what I did for the country, I deserve some kind of reward. But what's also interesting about Cartwright is that
here's a man who essentially just got together with the village blacksmith and carpenter and said, right, how are we going to do this? And it raises this puzzle about the Industrial Revolution: if he could make a major innovation in the Industrial Revolution period, why couldn't people have done the same a thousand years before, or two thousand years before? It's not that these men had any particular talent or genius; it just seems that in this period everyone was interested in making these innovations, and they simply assumed, hey, we could do something. It would be like one of us deciding today, I'm going to revolutionize the American auto industry — where do I start? — and going to see the local mechanic about what kind of machine to introduce.

So Cartwright comes in, and then the last of the great heroic Industrial Revolution innovators is Richard Roberts, whose device is the self-acting mule, introduced in 1830. What happened to Roberts? Poverty. In his case, by now the industry was mature enough that professional machine developers had appeared, and Roberts was one of the first great professional engineers in the industry. But developing these machines now cost a lot of money, and initially they're not that good, so at first it is hard to make money during the life of your patent. It's not like modern drugs, where on day one of the patent the product is fully effective; typically one of these machines works a little better than the existing stock, and it will get a lot better as people gain experience with it, but it isn't initially very profitable. In Roberts's case, the development costs for the machine were twelve thousand pounds over the first nine years of the patent — the patent would run fourteen years in Britain — and he made only seven thousand pounds in revenues. So he was well past halfway through the patent without even having recovered the development costs, even though this became a major and very important machine in the industry. Parliament then extended his patent by seven years to try to give him some more reward; but he was not a good manager of money, he died in poverty in 1864, and his daughter was then granted a pension of three hundred pounds a year by Parliament in recognition of his services to the country.

So what are the important features of this industry? It's a lot of tinkerers, small-scale mechanics, until this later period, when it becomes a professional business. The institutional rewards for innovation turn out to be very poor in this period. And these are only the original innovators: all of these machines had to be extensively developed by the people actually using them in factories. We have the records of some of those cotton manufacturers, and the interesting thing is that even though they were improving the productivity of these machines at a rate of something like two to three percent per year — an unheard-of sustained productivity advance in this world — they made the normal rate of return on capital, about a ten percent return per year. The men who organized the old-fashioned sector of the industry, such as hand-loom weaving, made about ten percent as well; grocers made about ten percent in Britain in this period. So there is a huge amount of innovation going on, but the major beneficiary of all of it is actually the consumer, because these firms are competing with each other while producing a very standardized commodity — this is not Coca-Cola, this is number 20 cotton yarn — and in that competition, if one of them figures out a way of doing it more cheaply, the next
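door firm will soon follow. Before going on, a quick arithmetic aside: the sustained two to three percent a year just mentioned compounds into enormous cumulative gains, which is easy to check directly. This is a minimal sketch using only the rates quoted in the lecture:

```python
# Compound the lecture's quoted rates -- plain arithmetic, no external data.
rate = 0.025                  # ~2-3% per year efficiency growth in the mills
years = 100
factor = (1 + rate) ** years  # cumulative improvement factor over a century
print(f"{rate:.1%}/yr for {years} years -> {factor:.1f}-fold")

# Conversely, a ~25-fold rise over the century implies an average rate of:
implied = 25 ** (1 / years) - 1
print(f"25-fold over {years} years -> {implied:.2%}/yr")
```

A 2.5 percent rate sustained for a century gives roughly a twelve-fold improvement, and a twenty-five-fold rise over a century works out to an average of a little over three percent a year. To return to the competition: if one firm finds a cheaper way of doing it, the next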
door firm pretty soon catches on, finds out how it's done, and they drive output prices down steadily. Over the period of the Industrial Revolution they increased the productivity of cotton textile production something like twenty-five-fold: you could get twenty-five times as much output per unit of input over the hundred years of the Industrial Revolution. And it turns out that cotton textiles — not only cotton itself, but with the extension to wool, linen and the other textile industries — explain the majority of the growth of the British economy in this period.

The puzzle this sudden burst of innovation creates is: what is the meaning of the Industrial Revolution? The suddenness seems to say, look, something happened in 1769 — except that, if you know anything about British history, nothing is happening in England in 1769. Incremental change is going on in the social system and the political system; it's a very flat period; you're not going to find any key precipitating event. So the question is: is this really a sudden break in the nature of the economy, or can it be regarded as just a continuation of the kind of technological advances that had been made over the previous six or seven hundred years?

The important thing about productivity growth for the economy as a whole is that the growth of efficiency in the economy as a whole is the sum, across industries, of the growth of efficiency in each industry multiplied by that industry's share of value added in the economy. Efficiency growth has this very nice property at the national level — it sums up neatly — which is why it's possible to say how much each industry contributed to the overall efficiency growth of the economy in this period. What that implies is that the effect of innovation in any area of the economy depends very heavily on what share of output is actually being produced in that sector. If you have a dramatic innovation in an industry for which there is very little consumer demand, it can't affect the overall productivity growth of the economy very much.

What's important about cotton textiles in this period is that, as clothing gets cheaper, there is huge demand for clothing. Just before coming to the lecture today — and believe me, I'm a person who does not spend a lot on clothing, as you'll have noticed — I was looking in my wardrobe and thinking, how could I have so much clothing? Most of it I'll never wear again. We can't wear out clothing now; we become tired of it long beforehand. Basically, in the pre-industrial period people had one or two suits of clothing per year, and we could still do that — two sets of clothing would be enough — but because of fashion and style, once clothing gets cheap there is this enormous demand for it, and Americans are now simply unable to wear out their clothing. What happens to it? It gets sent to Goodwill, and most of it can't even be resold there — in America you can go to Goodwill and buy pretty good clothing for almost nothing — so much of it is actually baled up and shipped to Africa and other third-world countries, and in the heart of the Congo people are dressed in your hand-me-downs. Something similar happened very soon after the Industrial Revolution: as clothing became very cheap, styles exploded, and people began to wear more and more clothing, clothing they never actually wore out. Now, there were two possibilities here. One was that clothing would become incredibly cheap and people would simply say, well, now I can just spend one percent of my income on clothing. That didn't happen: the share
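of income spent on clothing stayed substantial, and it is worth making the share-weighted sum just described concrete. Here is a minimal sketch; the industry names, growth rates and shares are invented for illustration, not historical estimates:

```python
# Aggregate efficiency growth as the value-added-share-weighted sum of
# industry efficiency growth rates (the decomposition described above).
# All numbers below are hypothetical, chosen only for illustration.
industries = {
    # name: (annual efficiency growth, share of value added)
    "cotton textiles": (0.025, 0.08),
    "agriculture":     (0.002, 0.35),
    "everything else": (0.001, 0.57),
}

aggregate = sum(g * s for g, s in industries.values())

for name, (g, s) in industries.items():
    print(f"{name:15s} contributes {g * s:.3%} per year")
print(f"aggregate efficiency growth: {aggregate:.3%} per year")
```

Note how the fast-growing industry moves the aggregate only in proportion to its value-added share — that is the sense in which demand for the product matters. To continue: the share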
of income devoted to clothing expenditures remained at ten or fifteen percent, so clothing remained a significant share of the economy, and the productivity growth could feed into the growth of output of the economy — partly just because of this accident of demand. There were innovations in the previous seven hundred years that were as dramatic as those in cotton textiles. The classic one is the invention of the printing press in the fifteenth century. If you look at the price of books over the first hundred and fifty or two hundred years after the introduction of the printing press, those prices fell as much as cotton cloth prices fell in the Industrial Revolution period: there was again about a twenty-fold expansion in the productivity of producing printed materials. Why did that have almost no measurable impact on the productivity and output of the economy of pre-industrial Europe? Because book production was tiny; once books got cheaper, the amount produced increased, but it still remained a tiny fraction of economic output. And that is an accident of people's demands. If, for example, the population of fifteenth-century Europe had largely consisted of university professors consuming large amounts of printed material, then the revolution of the printing press would have had dramatic effects on output and would have dramatically changed the measured output of the economy. But most people were still illiterate in this period, and there wasn't a lot to read — there simply wasn't much production of material that people wanted to read; it's not like now, when you have Oprah's magazine — that had not yet been developed as a lifestyle choice. So that is one accidental feature of the Industrial Revolution: there had been productivity advances in the past, but what was important now was that you had an

advance in an industry where there was a potentially huge consumer demand. That's one factor. A second accidental factor is the supply of raw cotton, which could otherwise have rapidly choked off growth in the textile industry. Cotton was very expensive coming into the Industrial Revolution period, and raw cotton prices also fell very substantially during it. That isn't counted as a productivity advance in the British economy, because it all happened externally, but it was a crucial factor in allowing the industry to remain as big as it was. If raw cotton had stayed at its original price — so that by the 1850s it was three times as expensive as it actually then was — the size of the cotton industry would have been much, much smaller: roughly half the cost of cotton goods, in the end, was the raw cotton in them, and dearer cotton would have reduced the amount of measured productivity growth in England. So an important contributing factor was the developments going on in the Americas — in particular the slave system in the southern United States and the productivity improvements being made in the cultivation of cotton. Again, that is a kind of accidental factor: it depended on the discovery of the Americas and the development of the U.S.
South, but it was important in creating growth in Britain in this period. A third factor also turns out to be very important. Another industry that could have been revolutionized in this period is, say, brick production; but bricks are very heavy, they tend to be produced very close to where they are used, and that would have limited demand to the domestic market. A key feature of cotton textiles is that the cloth is very light in relation to its value: even in the eighteenth century you could ship cotton textiles a long way without adding much to the price. So the other thing that made the industry huge in Britain was that, very quickly, the majority of its output was being exported — and it mattered that Britain was winning the war for supremacy on the seas, because that is what gave Britain access to very large numbers of international markets. There is an interesting alliance here between the technological advance, which created a demand for overseas markets, and the extension of British political and military power in this period, which actually opened those markets up. India, for example, became a huge market for British textile goods; China became an enormous market for British textile goods. If both of those countries had controlled their own political destinies, they would have excluded British imports, and that would have reduced the share of British output coming from textiles and reduced the productivity growth of the economy in the Industrial Revolution period. So you have this interesting alliance: the nature of demand — the price elasticity of demand — the supplies of the raw material crucial to the industry, and, third, the ability of the British to keep overseas markets open for these goods.

There is a fourth factor which also enters in this period: British population started to grow very strongly in the 1760s, and over the course of the Industrial Revolution the British population roughly tripled. By then all the land in Britain was completely occupied; there was no way to produce enough food to feed all these people, and that meant Britain had to export large quantities of manufactured goods in order to import food from countries like Ireland, and eventually the United States, and to import — also very importantly — raw materials from the Baltic. That in turn meant that exchange rates would be such as to favor the export of manufactured goods. So, as I say, a lot of things had to come together in Britain in the 1760s to produce such a huge growth of productivity from this one particular industry, and it raises the puzzle of to what extent this was really an accident, as opposed to some dramatic or systemic change in the economy.

Another factor you have to consider is the following, and here we need a little bit of geography: here is England, stuck on the edge of Europe; here is the continent; and then the Baltic. The productivity growth of the British economy moved from roughly 0.1 percent to about 0.5 percent per year in the Industrial Revolution period. That is a pretty dramatic change — not yet a modern productivity growth rate, but a dramatic change from what occurred in the pre-industrial world. But that is measuring productivity at the level of the British economy. If you were to measure productivity growth at the level of Europe as a whole in the Industrial Revolution period, it would be very significantly lower. One of the reasons productivity growth is so high in Britain is that cotton goods are being manufactured here and then exported to the rest of Europe; when you measure productivity growth in those other economies, you don't count cotton textiles — it's not a product of
those economies. Right, so if you looked at Europe as a whole (Britain is just an arbitrary political boundary that we're using to measure productivity growth), we shouldn't take Britain in isolation; it's part of a system of trading economies in Europe, and production of any good will tend to concentrate in a particular location. Once these innovations were made in Britain, they could have been exported and the production could have occurred elsewhere. Production remained in Britain in part because the British were very good at organizing factories. So how we measure that productivity is going to be influenced very strongly by where we draw the boundaries. For example, even though productivity growth tends to be measured at the English level, Ireland in this period is politically united with England. If we include Ireland in the calculation (because a lot of these goods are being exported to Ireland), what would actually happen is you would reduce measured productivity growth rates in the English economy in this period down from the 0.5 to 0.4 or 0.35. But then again the question is, well, why should we just include Ireland or Scotland? What about France? What about Germany? Wherever do you want to draw the boundaries? Now, it's hard to convey in a short time the importance of that issue, but another way to think about this is that most of the cotton industry was concentrated in this tiny area here in Lancashire, where a very large fraction of the population is engaged in cotton textile production. If we instead say, look, I don't think we should think about the Industrial Revolution as a British affair, I think we should think about it more as an affair of this small area in the northwest of England, we can again measure productivity growth for the county of Lancashire, or for the northwest of England. In that case the productivity growth rate in that area would be
something like one and a half percent per year in the Industrial Revolution period. You would get this incredibly sharp break from the pre-industrial world, because the more you focus on where these goods are being produced, the more dramatic the productivity growth within the economy is. But as I say, it's not at all obvious what the appropriate level of analysis is. Is it where the industry happens to locate? Suppose it happened to locate in the Netherlands after the Industrial Revolution: would you then have said, well, that's where all of the productivity advances were and that's where the Industrial Revolution occurred? Because a lot of this is just about the fact that it was still cost-efficient to have these mills in Lancashire, in this particular area. And so there's an argument that can be made: look, Europe as a whole was likely to experience various productivity advances; industries tend to concentrate once they have an advantage; what happened was the British happened to make some of these important early innovations, they got ahead in this industry, they captured the industry, and that's where it located. But we should think about Europe as a whole as being the place where this could likely have happened. And the reason why that's not a crazy speculation is that in France, in exactly the same period as the revolution in textiles in Britain, the French actually introduced a textile device that was more sophisticated and more innovative in many ways than anything the British introduced in this period: the Jacquard loom. It took roughly 78 years of work to finally perfect it: the first idea for the Jacquard loom came in 1725, and it was finally produced in perfected form in 1803 by Jacquard. And what does the Jacquard loom do? Let me just create some space here. There are different types of weaving. Remember, in all weaving you've got your warp here and then you've got to insert the
weft. Now, you can produce plain gray cloth, but there's a lot of demand for patterned and colored cloth, and there are different ways of doing that. The cheapest way is just to produce the gray cloth and then bleach it and dye it after you produce it. But there's a much more sophisticated way of producing materials, where you color the threads before you assemble them and then interweave them to form various patterns. I see some of you are wearing shirts that are done in that way. And it turns out that when you interweave these threads to form these patterns, you can in fact put the picture of the Mona Lisa on cloth if you want, just by the appropriate choices of thread. So there was demand in this period for these very fancy, elaborately patterned, colored cloths. If you go to India now, for example, saris are often produced in this way, with very elaborate patterns created directly from the thread; it's just the interweaving of the thread. If you insert, say, a blue thread under and over the other threads in particular ways, you can do almost anything: you can make any picture you want, any face you want; Obama's face could be produced in this type of weaving and put on these cloths. It turns out this was an area of the industry that the French were much more involved in than the British, in particular because France had a much larger market for very high-quality cloth, for luxury goods, in part due to the nature of the French court and the French upper classes. Producing these cloths was incredibly elaborate and time-consuming, because the way they were actually produced is that you have to tie off the various threads here into a string, and that lifts up a set of the warp threads, and then you insert the weft, and then you would have another set of them tied off in another pattern, and
so you would have at the side of the loom hundreds of these different draw cords that you would have to pull in the right sequence in order to get the weft inserted correctly to produce this elaborate pattern. So it was very time-consuming, it was very skilled, it took a lot of work, and it was very expensive. What was the idea of the Jacquard loom? The idea was to automate that process, and what the Jacquard loom introduced was the punch card that was later used for computers. In the Jacquard loom, these cards had a hole position corresponding to each of the threads in the cloth, and those holes could either be left filled or be punched out. Then there was a device put on top of the loom where these cards ran through in sequence, and each line there corresponds to a set of instructions in terms of which of these threads to pull up: there were needles that would come through, and if a needle went through a hole it would lift the thread, and if it didn't, the thread would stay down. The threads are each attached to a string, and then they can be pulled up or down depending on the instructions. And so what's amazing about the Jacquard loom is that conceptually it's a dramatic break. You have to really twist your thinking: how do we go from all these strings to an automatic coding of this pattern? And once you've done it, then a cloth designer can design the cloth, craftsmen or carpenters can carve out these patterns, and relatively unskilled weavers can run them through the machine. This actually became the foundation of a whole other branch of the textile industry, and silk weaving in India now is very important in producing the cloth for that market. It's just as dramatic as anything in Britain, but it turns out to have very limited impact in terms of the productivity of French industry, and the reason is that this is still
very expensive, very high-end cloth; there isn't a huge market for it. It's just a function of the relative sizes of these markets. And so an argument could be made that the French were just unlucky; they were smart. It turns out the French did a whole bunch of other innovations in the Industrial Revolution period that turned out not to have a huge impact. The balloon was invented in this period; the first parachute jump was made by the French, from a balloon; they invented fruit preserving. They did a bunch of things at the same time as the British were doing things; it just turns out that, by accident, they were not very lucky: they were producing lots of Concordes in the Industrial Revolution period, that is, innovative designs that didn't turn out to have a huge market. And so again the idea comes up: look, there were a bunch of countries competing here in the Industrial Revolution period. Could it just be accidental that the British were the ones who made the innovations that then had these very profound impacts? And so shouldn't we consider the Industrial Revolution more as a European phenomenon? As a European phenomenon it represents a much less dramatic break from the past than the Industrial Revolution where you concentrate on the north of England. And so it becomes possible to think of Europe as having had a gradually increasing rate of innovation going into this period, and then various accidental events in Britain in the Industrial Revolution period had this magnifying effect. It was that combination of British military power, British population growth occurring at the same time as Britain's inability to feed itself, and the introduction of these new techniques which then propelled this country forward and gave the mistaken impression that in 1769, suddenly, that's the year the world changed. We'll never actually know exactly when that change came from earlier society to later society; the Industrial
Revolution is potentially a much more drawn-out affair, where luck, accident, and geography all play some role in the process. OK, so next time I will say just a little bit more about the other innovations along the same lines, and then go on and talk about the consequences of the Industrial Revolution.
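The boundary-drawing point from this lecture can be made concrete with a toy calculation. The region growth rates and output shares below are made-up illustrative numbers, not historical data; only the broad ordering (Lancashire at roughly 1.5 percent, Britain near 0.5 percent, Europe lower) mirrors the rates quoted above.

```python
# Toy sketch: the measured aggregate productivity growth rate depends on
# which regions you include, i.e., where you draw the boundary.
# All rates and output shares below are illustrative assumptions.
regions = {
    "Lancashire":      (0.015, 0.05),  # (annual productivity growth, output share)
    "rest of Britain": (0.004, 0.25),
    "rest of Europe":  (0.001, 0.70),
}

def aggregate_growth(selected):
    """Output-share-weighted average growth over the selected regions."""
    total_share = sum(share for _, share in selected.values())
    return sum(g * share for g, share in selected.values()) / total_share

britain = {k: regions[k] for k in ("Lancashire", "rest of Britain")}

print(f"Lancashire alone: {regions['Lancashire'][0]:.2%}")
print(f"Britain:          {aggregate_growth(britain):.2%}")
print(f"Europe:           {aggregate_growth(regions):.2%}")
```

The same underlying activity yields a smaller measured growth rate the wider the boundary is drawn, which is exactly the lecture's point about Lancashire versus Britain versus Europe.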
Stanford CS229M: Machine Learning Theory (Fall 2021), Lecture 5: Rademacher complexity, empirical Rademacher complexity

So I guess, yeah, sorry for the delay a little bit. I couldn't find water somehow. Anyway, OK, let's get started. So last time we talked about concentration inequalities, which was some preparation for what we need today or maybe the next lecture. And today we are going to go back to uniform convergence. So recall that our goal was to prove a uniform convergence result, and we have proved some results. For example, we have shown that the excess risk is bounded by this uniform convergence quantity; we basically care about something like the sup of the differences. So we have shown that the excess risk (something wrong with my pen?) L(h hat) minus L(h star) is bounded by something like (L hat(h star) minus L(h star)) plus the sup over h in capital H of (L(h) minus L hat(h)). And we have used this to get certain uniform convergence results. For example, for a finite hypothesis class, we have shown that sup over h in capital H of (L(h) minus L hat(h)) is bounded by roughly the square root of (ln |H| over n), if you ignore other log factors, and this can be turned into an excess risk bound. And also, for a hypothesis class parameterized by p parameters, we have got sup over theta in capital Theta of (L(theta) minus L hat(theta)) bounded by something like O tilde of the square root of (p over n). This is what we did two lectures ago. And you can think of this, I guess we have discussed this briefly.
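Written out, the recap just stated is the following (with L hat the empirical risk, h hat the empirical risk minimizer, and h star the population minimizer):

```latex
L(\hat h) - L(h^\star) \;\le\; \big(\hat L(h^\star) - L(h^\star)\big)
  + \sup_{h \in \mathcal{H}} \big(L(h) - \hat L(h)\big),
\qquad
\sup_{h \in \mathcal{H}} \big(L(h) - \hat L(h)\big) \;\lesssim\; \sqrt{\frac{\ln |\mathcal{H}|}{n}}
\;\;\text{(finite } \mathcal{H}\text{)},
\qquad
\sup_{\theta \in \Theta} \big(L(\theta) - \hat L(\theta)\big) \;\le\; \tilde{O}\!\left(\sqrt{\frac{p}{n}}\right).
```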
So this quantity and this quantity are, in some sense, complexity measures of the hypothesis class. And this is generally the type of result we're going to get: something that decreases as n goes to infinity, and another factor, which is the complexity of the hypothesis class. Basically, you can say that if n is bigger than the complexity of the hypothesis class, then you get a non-trivial error bound. So the problem with these two bounds is the following. You can talk about different limitations from different perspectives, but I think the basic limitation of this p-parameter bound is that it requires n to be much bigger than p for the bound to be small. And this is not necessarily feasible in many cases, and it is also not really what happens in reality. In reality, n smaller than p is quite common. It's not always the case, but it's pretty often, and it's more common in the modern situation where you have a so-called overparameterized neural network. (I will define overparameterized more carefully in later lectures.) Basically, in the modern setting with a deep neural network, ImageNet has a million examples, but your parameters could be something like 10 million, or maybe 100 million, sometimes billions. Of course, this is not always the case; sometimes you still have n bigger than p, depending on the situation. But generally, people have found it useful to make your network, your p, very large. So it's definitely not the case that you always want n to be much, much bigger than p. And the reason this bound does not capture what happens in reality is that it is not precise enough, in the sense that the complexity measure is, in some sense, too worst-case.
The complexity measure is measuring the complexity of all possible models with p parameters, but you are not specializing to some special kind of models among all the models with p parameters. For example, in the more classical language, you cannot distinguish a sparse parameter from a dense parameter; you cannot distinguish a parameter class where theta has a 1-norm bound from a hypothesis class where theta has a 2-norm bound. In either of these cases, the same p will show up in your bound, but not the control of the norm of the parameters. So that's why we are looking for something more precise, which does not depend on p but depends on some more accurate characterization of the complexity. So today and in the next few lectures, our goal is to prove something like: L(theta hat) minus L hat(theta hat) is less than something like the square root of (complexity of theta over n), where this complexity measure can be more fine-grained than just the single number p. And this complexity measure could possibly also depend on the distribution P, where P is the distribution of your data. So maybe for some distribution P your complexity is smaller, and for some other distribution P your complexity is higher. We are trying to capture the intrinsic difficulty of the learning problem. But of course, this is somewhat subjective, because it depends a little bit on what you believe is happening in real life. If you believe that the real parameter is sparse, then you probably should have a complexity measure that captures the L1 norm of the parameters. If you believe that the ground-truth parameters have other properties, then you probably should use a different complexity measure.
So that is the general goal. Also, the practical way of thinking about this is that you can think of the right-hand side as something that motivates your regularization. The practical implication is that you can use this complexity of theta as a regularizer. If you just optimize your model, you're going to find some parameter theta, and especially if you don't have enough data, you may have multiple global minima in the search space. But if you know that a certain complexity measure makes the bound better, then you can actively look for models with small complexity: you take this complexity measure from the right-hand side, multiply it by lambda, and add it to your training loss, so you get a regularized loss. That way you are more likely to find a small-complexity model, which generalizes better. So I guess that's the basic idea. And what we're going to do is the following. The object is the uniform convergence quantity, the sup: we want to show that L(h) minus L hat(h) is small for all possible h. In the first part of the lecture, we're going to bound the expectation of this sup, as a kind of weaker goal, and in the second part of the lecture, if we have time, we're going to bound it with high probability, without the expectation in front. And where does the expectation come from? The randomness comes from the training data: L hat depends on the training data, and the training data are randomly drawn. So what's inside the expectation is a random variable that depends on the randomness of the training data, and you take the expectation of this random variable. That's the goal: we're going to upper bound this with some other quantities that we think are more intrinsic and convenient for us to use.
So I guess I need to start with some definitions. This is the definition of Rademacher complexity, which is the main object we're going to focus on in this lecture. Definition: let F be a family of real-valued functions. So far, in this definition, F is just an abstract family of functions, and we're going to define a complexity for it. Later we'll say which functions we actually care about: we care about the family of the losses. But for now, F is just an abstract family of functions, and we define a complexity measure for this abstract family. So let's say this family of functions maps some input space, call it Z, to the real numbers, and let P be a distribution over this input space Z. Then the (average) Rademacher complexity of F (often you don't need to specify "average") is defined as follows. We write R sub n of F, where n indicates how many examples you have, how many empirical examples. R_n(F) is defined like this: you first draw some examples Z_1 up to Z_n, which you can think of as training examples, i.i.d. from the distribution P. Then you draw some so-called Rademacher random variables; recall that Rademacher random variables are just binary, plus 1 / minus 1, uniform. You draw sigma_1 up to sigma_n, i.i.d. uniformly from {minus 1, 1}. And then you look at the expectation of the sup over this function class F of the average of sigma_i times f(Z_i), i from 1 to n. So this sounds like a pretty complicated definition, but let me try to interpret it a little bit. First, only think about the quantity inside the sup. This is a correlation; the 1 over n is just a normalization, which is not important.
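In symbols, the definition just described is:

```latex
R_n(\mathcal{F}) \;=\;
\mathbb{E}_{\substack{Z_1,\dots,Z_n \,\overset{\text{iid}}{\sim}\, P \\[2pt]
\sigma_1,\dots,\sigma_n \,\overset{\text{iid}}{\sim}\, \mathrm{Unif}\{\pm 1\}}}
\left[\, \sup_{f \in \mathcal{F}} \; \frac{1}{n} \sum_{i=1}^{n} \sigma_i\, f(Z_i) \right].
```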
This is the correlation between the outputs of f, that is, f(Z_1) up to f(Z_n), and the random variables sigma_1 up to sigma_n. Of course, for a single fixed f, this correlation should typically be very close to 0, because f shouldn't correlate with random variables. But there is a sup: you first draw sigma_1 up to sigma_n, and then you take a sup over f. So basically, this whole thing is the maximal correlation between the outputs of f and sigma_1 up to sigma_n, after you draw the sigma_i's. You first draw the sigma_i's, and then you try to find an f that correlates with them, an f whose outputs look like the random signs you have drawn. So in some sense, high complexity means that for most binary patterns (a binary pattern just means a draw of sigma_1 up to sigma_n), there exists an f in this function class such that the outputs of f are similar to, or correlate with, the random pattern. For any random pattern you draw, you can find, post hoc, a function f in this family such that the outputs on Z_1 up to Z_n look like the random pattern you have drawn. So in some sense, this measures how diverse the outputs of this family of functions F can be. If this family can map your Z_1 up to Z_n into any possible pattern, then the Rademacher complexity will be the largest. For example, if every binary pattern can be output by this family of functions on the Z's, then you get the maximum Rademacher complexity, intuitively. Any questions so far? Is this necessarily a non-increasing function of n? The question is, is this necessarily a non-increasing function of n? I think it should be.
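The definition can be estimated numerically for a small finite class. The following is a toy Monte Carlo sketch (my own example, not from the lecture), using a family of one-dimensional threshold classifiers; the function and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rademacher_complexity(F_outputs, n_trials=2000):
    """Monte Carlo estimate of E_sigma[ sup_f (1/n) sum_i sigma_i f(z_i) ]
    for a finite family F, given its outputs on a fixed sample z_1..z_n.

    F_outputs: array of shape (num_functions, n), entry [j, i] = f_j(z_i)."""
    _, n = F_outputs.shape
    total = 0.0
    for _ in range(n_trials):
        sigma = rng.choice([-1.0, 1.0], size=n)   # Rademacher signs
        total += (F_outputs @ sigma / n).max()    # sup over the finite family
    return total / n_trials

# Toy family: one-dimensional threshold classifiers h_t(z) = sign(z - t).
n = 100
z = rng.uniform(0.0, 1.0, size=n)
thresholds = np.linspace(-0.01, 1.01, 50)
H = np.sign(z[None, :] - thresholds[:, None])     # shape (50, n), entries +/-1

print(rademacher_complexity(H))
```

The estimate comes out well below 1 and shrinks as n grows, which matches the intuition that a small family of thresholds cannot track most random sign patterns.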
But I don't think it's trivial to see why it is non-increasing. At least off the top of my head, I don't see a super simple argument, but I think you can prove it without too much effort. Roughly speaking, the way you prove it is that, because you take the sup, you can switch the sup with the expectation for one example, for example the last example, and then you get roughly the definition of the (n minus 1)-version of the Rademacher complexity. But maybe I shouldn't do it on the fly, in case I miss something and get stuck; roughly speaking, I think that should work. Any other questions? By the way, I never get any questions from Zoom, so you should feel free to speak up; just unmute yourself and ask questions. Sometimes I'm not even sure whether the Zoom is working. Should f be mapped to minus 1 to 1? Otherwise it would be [INAUDIBLE]? That's a great question. So f is not required to map to plus 1 / minus 1, and it's true that this quantity can be unbounded. It is actually sensitive to the scale of f: if you scale f by a factor of 2, then you get 2 times the Rademacher complexity. And this is actually somewhat useful in certain cases, which we will probably talk about later. Is there a question? Cool. So now, let's see why we care about this Rademacher complexity. The reason is the following theorem. Suppose you do this hypothetical experiment: you draw n examples from distribution P, and then you look at this quantity: the sup over f of the average of f(Z_i), i from 1 to n, minus the expectation of f(Z). This is the kind of quantity we dealt with in the last lecture, the concentration, how much you deviate from the mean. But you take a sup here, because you care about the maximum possible deviation, post hoc, after you draw the examples.
If you look at this quantity, then its expectation is bounded by 2 times the Rademacher complexity of F. To appreciate what the theorem is really doing, it's probably time to say exactly what kind of F we care about. F so far is an abstract thing, but now let's instantiate it. Suppose you take capital F to be the family of functions that maps Z, which is taken to be a pair (x, y) of input and output, to the loss of (x, y) on the hypothesis h, for every h in the hypothesis class. Basically, this is the family of losses: every model h is a function, and given a model you get a loss function defined by the model h. So this is the composition of the model with the loss function, the little l. Together, each element of F maps a data point to the loss on that data point, and as you vary which model you care about, you get a family of losses. So in some sense it's just a slight extension of the family of models, but here it's about the losses. Suppose you take F to be this. Then you can see that the left-hand side is exactly what we were trying to bound, just because f(Z_i) is the loss l(x_i, y_i) at h. Then the empirical average, 1 over n times the sum of f(Z_i), is just 1 over n times the sum of l(x_i, y_i) at h; this is just the empirical loss of the hypothesis h. And the expectation of f(Z) is the expectation of the loss, where x and y are drawn from the distribution P, and this becomes the population loss. So that's why the left-hand side of this theorem is really just the sup over h of L hat(h) minus L(h), something like this, and you take the expectation over the randomness of the data.
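In symbols, the theorem and its instantiation to the loss class read:

```latex
\mathbb{E}_{Z_1,\dots,Z_n \sim P}\left[\, \sup_{f \in \mathcal{F}}
\left( \frac{1}{n}\sum_{i=1}^{n} f(Z_i) - \mathbb{E}_{Z \sim P}[f(Z)] \right) \right]
\;\le\; 2\,R_n(\mathcal{F}),
\qquad\text{and with }
\mathcal{F} = \{\, (x,y) \mapsto \ell((x,y), h) : h \in \mathcal{H} \,\},
\qquad
\mathbb{E}\left[\, \sup_{h \in \mathcal{H}} \big(\hat L(h) - L(h)\big) \right]
\;\le\; 2\,R_n(\mathcal{F}).
```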
So that's the weaker version of uniform convergence that we outlined at the beginning of the lecture, and you can bound it by the Rademacher complexity of this function class F, the Rademacher complexity of the family of losses. Basically, the theorem is saying that (technically, the expectation of) the generalization error is less than 2 times the Rademacher complexity of F. And there was a question here? Is there a sup of the absolute value of the difference? No, there is no absolute value here. There is no absolute value? There is no absolute value; that's a great question. It becomes a little trickier if you add an absolute value: first of all, you need a slightly different proof, and second, you get a different constant, probably 4 instead of 2. The cleanest way is to not use an absolute value in this theorem; you do the absolute value in the outer layer. Actually, you don't even need the absolute value anywhere, because eventually you only need one side of the bound when you bound the generalization error. OK? And if you really think about this R_n(F) in this context, for this particular F, what does it mean? It means: how well can the family of losses, the losses on n data points, correlate with a random pattern? This still sounds a little unintuitive, so we can simplify it further in a special case. Suppose you have binary classification: say y is in {plus 1, minus 1}, and l is the 0-1 loss, so l(x, y, h) equals the indicator that h(x) is not equal to y. If they are not equal, you have loss 1; otherwise you have loss 0. In this case, we can interpret this a little more. What you can do is the following.
First of all, you write this indicator in the form 1/2 times (1 minus y times h(x)). Here I'm assuming h(x) is also in {plus 1, minus 1}. (By the way, what I'm doing here is instantiating this to a special case so that you can interpret the Rademacher complexity in a more intuitive way; this computation is also useful by itself.) So when h(x) is plus or minus 1 and y is plus or minus 1, the indicator that they are different can be written this way: if y and h(x) are different, then y times h(x) is minus 1, and the whole expression is 1; if y and h(x) are the same, then y times h(x) is 1, and the expression is 0. You can just verify it. The reason we do this is that it makes the expression linear in y and h(x). Then you look at the Rademacher complexity: R_n(F) is the expectation of the sup. Let's plug in the loss. In the definition I had two expectations; now I merge them into one, so the randomness comes from both the data and the Rademacher pattern. You get the expectation of the sup over h in capital H of 1 over n times the sum of (1/2 times (1 minus y_i h(x_i))) times sigma_i. Now let's do a very simple rearrangement: this splits into minus 1 over n times the sum of 1/2 times y_i h(x_i) sigma_i, plus 1 over n times the sum of 1/2 times sigma_i. The second quantity sits inside the sup, but it's a constant that doesn't depend on h, so you can pull it outside the sup, just because the sum of the sigma_i is a constant. Then, switching the expectation with the sum, you get the expectation of the sup of the first term plus the expectation of 1 over 2n times the sum of sigma_i, and this second term becomes 0, because the expectation of a Rademacher variable is 0. So we're only left with the first quantity.
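The sign identity used in this step is easy to check exhaustively. Here is a tiny sanity-check script (my own, not from the lecture):

```python
# For y, h(x) in {-1, +1}, the 0-1 loss satisfies:
#     1{h(x) != y}  ==  (1 - y * h(x)) / 2
for y in (-1, 1):
    for hx in (-1, 1):
        indicator = 1 if hx != y else 0
        assert indicator == (1 - y * hx) / 2
print("identity verified for all four sign combinations")
```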
And if you look at the first quantity, you realize that sigma_i is a random variable, and y_i sigma_i has the same distribution as sigma_i, no matter what value y_i takes: for y_i equal to 1 or to minus 1, they have exactly the same distribution. That's why you can replace y_i sigma_i (actually minus y_i sigma_i; that's still true, because we randomly flip the sign) by sigma_i itself without changing the expectation. I saw some confusion, so the easiest way to check this is: you just define sigma_i prime to be minus y_i sigma_i. Then you get the expectation of the sup over h of 1 over n times the sum of h(x_i) sigma_i prime. But the distribution of sigma_i prime is still plus 1 / minus 1, uniform and independent, so sigma_i prime has the same distribution as sigma_i, and you can just write it with sigma_i. OK? So what have we achieved here? This is a strictly simpler quantity than before. Why? Up to the factor in front, this is exactly the Rademacher complexity of the hypothesis class H, not of the family of losses. Before we were talking about the Rademacher complexity of the family of losses, and now we're talking about the Rademacher complexity of the hypothesis class H itself. Actually, I think I'm missing something: I'm missing the 1/2. Where did the 1/2 go? I have the 1/2 in the notes; I just forgot to copy it. So basically, it's 1/2 times the Rademacher complexity. What we achieved is that the Rademacher complexity of F, in this special case of binary classification and 0-1 loss, is equal to 1/2 times the Rademacher complexity of the hypothesis class. So that's a slightly simpler way of thinking about this.
Because what is this quantity? It is basically saying how well H can memorize random labels. You can think of sigma_1 up to sigma_n as random labels, and R_n(H) is big when there exists an h in capital H such that h(x_i) equals sigma_i for all i; that's the best situation, the strongest correlation. So basically, if you can memorize all the random labels with some hypothesis in the class, that means your Rademacher complexity is the biggest, and that gives you the worst generalization bound. And vice versa: if you cannot memorize, then you get a better generalization bound. Right. OK. I have a question [INAUDIBLE]. So [INAUDIBLE] yi [INAUDIBLE]. But [INAUDIBLE]? I see, that's a good question. Let me repeat the question. So sigma_i prime is equal to minus y_i times sigma_i, but y_i itself is a random variable; can we still claim that sigma_i prime has the same distribution as sigma_i? Indeed, that's a good question. Technically, what you should do is the following. There are two sources of randomness: one from the data and one from the sigmas. So you first condition on x_i, y_i; then, in the inner expectation, you look at the randomness of sigma_i. After you condition on x_i, y_i, it is absolutely clear: for any deterministic choice of y_i, sigma_i and sigma_i prime have the same distribution. So you do it inside, and then y is gone from your formula, and you don't have to worry about the outer expectation. Makes sense? Cool. Sounds good. So the take-home message here is that the Rademacher complexity of F is similar to the Rademacher complexity of the model class.
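As a toy numerical illustration of this memorization view (my own sketch, not from the lecture): a class rich enough to realize every labeling of the sample has the maximal Rademacher complexity, while a trivial class has a small one. All names below are illustrative.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 8  # keep n tiny so we can enumerate all 2^n labelings

# "Memorizing" class: every possible +/-1 labeling of the n points is realizable.
all_patterns = np.array(list(itertools.product([-1, 1], repeat=n)), dtype=float)

# Trivial class: just the two constant classifiers.
constants = np.array([[1.0] * n, [-1.0] * n])

def rademacher_estimate(F_outputs, trials=2000):
    """Monte Carlo estimate of E_sigma[ sup_f (1/n) sum_i sigma_i f(z_i) ]."""
    vals = []
    for _ in range(trials):
        sigma = rng.choice([-1.0, 1.0], size=n)
        vals.append((F_outputs @ sigma / n).max())
    return float(np.mean(vals))

print(rademacher_estimate(all_patterns))  # exactly 1.0: some h matches any sigma
print(rademacher_estimate(constants))     # much smaller: roughly E|mean(sigma)|
```

For the memorizing class, every drawn sign pattern is itself one of the realizable labelings, so the sup correlation is 1 on every trial; the constant class can only track the average sign, which concentrates near 0.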
And the Rademacher complexity of the model is basically saying how well we can memorize random labels, right? But there is a small caveat here, which is that this relationship is not always true. This relationship is exactly true for binary classification with 0-1 loss, but it's not true for, for example, some other loss functions. I think the intuition largely is still correct, but you cannot take this literally or rigorously -- like religiously -- for every situation. And in some cases, actually, there could be confusion, because there could be cases where these two are mismatched, especially if your loss function can do something different. For example, the loss function could change a binary number to a real number, or the loss function could have other kinds of properties. For example, the loss function is often nonlinear -- suppose you take the exponential loss. And actually, in the past, in some extreme cases, some papers actually misinterpreted this, in some sense. I guess I'm just giving you a warning, in some sense: don't always apply this every time without even thinking about it. The intuition is roughly true, but it's not exactly true at all times. There will be a place where I'm going to mention this again, in some of the later lectures. Can I ask one question? And by the way, what we will do next is that we are going to prove the theorem. And just a small overview of what we will do next lecture. So in this lecture, we are going to deal with this abstract measure, the Rademacher complexity, right? And you may wonder -- probably some of you are wondering -- why Rademacher complexity is something measurable, something that is useful. We don't answer that today. We are going to answer that in the next few lectures. So today we are just introducing this Rademacher complexity and saying that it bounds the uniform convergence error.
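As a concrete illustration of the caveat (the setup here is invented, not from the lecture): with plus/minus-1-valued hypotheses and the exponential loss exp(-y*h(x)), the loss value equals cosh(1) - y*h(x)*sinh(1), an affine transform of the hypothesis output, so the loss-family Rademacher complexity comes out to sinh(1), roughly 1.175, times the model's, rather than 1/2 of it:

```python
import itertools
import math
import random

random.seed(2)
xs = [0, 1, 2]
ys = [1, -1, 1]                # arbitrary labels (assumed)
n = len(xs)
H = [dict(zip(xs, v)) for v in itertools.product([-1, 1], repeat=n)]

def emp_rademacher(vecs, trials=20000):
    """Estimate E_sigma max over vecs of (1/n) sum_i sigma_i v_i."""
    total = 0.0
    for _ in range(trials):
        sigma = [random.choice([-1, 1]) for _ in range(n)]
        total += max(sum(s * v for s, v in zip(sigma, vec)) / n for vec in vecs)
    return total / trials

# exponential loss exp(-y * h(x)) instead of 0-1 loss
exp_loss_vecs = [[math.exp(-y * h[x]) for x, y in zip(xs, ys)] for h in H]
hyp_vecs = [[h[x] for x in xs] for h in H]

ratio = emp_rademacher(exp_loss_vecs) / emp_rademacher(hyp_vecs)
print(ratio, math.sinh(1))     # ratio lands near sinh(1), not 1/2
```

So the composition with a different loss rescales the complexity by the loss's slope rather than the 0-1 loss's 1/2, which is exactly the warning: the constant depends on the loss, not just on the hypothesis class.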
And this Rademacher complexity is something intuitive, I hope you find, right? It's talking about how well you can memorize labels, so it's something that at least makes sense. And in the next few lectures, we are going to instantiate this for more concrete models, where you can bound the Rademacher complexity by something more concrete. I got some-- oh. Did somebody ask a question? I didn't hear. Yeah, I think a couple of people chimed in. You answered my question in the meantime. But someone else might have a question still. Yeah. Sorry. I forgot to have my volume on. Yeah. Please ask questions if you-- yeah. Now it's working. All right. Thank you. Oh, actually, there is a question. What is the connection between the Rademacher complexity and the degrees of freedom? I assume by degrees of freedom you mean the number of parameters, right? So I guess that's kind of like what we motivated in the beginning. Using this Rademacher complexity, we will be able to prove a more precise bound than one based on the number of parameters. Probably so far you haven't seen that -- I don't expect you to see that -- but in the next lecture, we're going to see you can prove better bounds that depend on something more fine-grained than the number of parameters. I hope that answers the question. But please feel free to unmute yourself and ask any follow-ups. I have a, I guess, conceptual question. How do you generally think about the distinction between the family of hypotheses versus the family of losses over them? Because they have the same cardinality, right? There seems to be a direct mapping between the two. How do you distinguish, I guess, in your mind between those two? How do you think about them? Yeah, that's a great question. So in my mind, they are very similar, except that -- I think this will be a little more explicit in the next lecture, or maybe two lectures later.
So, except that when you talk about the models, the models oftentimes output a real number. For example, if you think about logistic regression, the model outputs the logits, which could be anywhere on the real line. And then you turn that into a probability and use that probability to compute the loss. And the loss becomes something, first of all, non-negative. And often the loss is reasonably bounded -- it's between 0 and 1. For the logistic loss it's not between 0 and 1, but I think the most interesting regime is where it's somewhat small, between 0 and 1. And if you care about the classification loss, then it's literally between 0 and 1. So the loss function has a scale in some sense -- it's something on the order of 1 -- but your model could sometimes be outputting bigger numbers. So there is a conversion there, which will be more explicit in future lectures. But beyond that, typically I don't distinguish them very much. Gotcha. Yeah. I found it interesting that in the example, at least for binary classification, the complexity of the loss family was half of the complexity of the model family. Is it common that your complexity goes down when you compose it with the loss function? I think it's common that they are related. We will see that in many cases they can be related, but I wouldn't read too much into that constant of 1/2, because this 1/2 does depend on how you define your labels. For example, if your labels are 0, 1, I think you wouldn't see the 1/2. So there are some small artifacts there in the constant; it doesn't really matter that much. Sure. OK. Cool. OK. Let's continue. So we're going to prove this. And the proof is called the symmetrization technique. And this is a technique that can be used in many other cases -- not necessarily in this course, but in other areas of probability, let's say.
So, the symmetrization technique -- I think it probably comes from probability theory in the first place, anyway. So the technique is: let's write down what we care about. What we care about is the sup -- I mean, I'll drop the expectation for now, just so that it's a little bit cleaner; we will take the expectation in a bit. So this is not symmetric, in some sense, because you have this subtraction here, and these two terms don't look the same. That's what I mean by not symmetric. So there's a way to make them somehow more symmetric. What you do is -- for now, let's say we fix Z1 up to Zn. And then we let Z1 prime up to Zn prime be a different draw, another draw from the distribution P, iid. So you draw a fresh iid copy of Z1 up to Zn. And then what you can do is convert this second quantity, the expectation Ef, using the Zi primes. Just because, by definition, all the Zi primes have the same distribution P, the expectation Ef is really the same as the expectation of 1 over n times the sum of f of Zi prime -- because each of these terms has expectation equal to Ef, and you average them, so you get Ef. And you see that this already makes it a little bit more symmetric -- on the surface it looks more symmetric, because this is a sum of things and this is a sum of things. Of course, it's still a little bit different, because the expectation is in front of the second sum, but there is no expectation in front of the first. So one thing you can do is put the expectation in front of the first term as well, which is not really doing anything, because for now Zi is a constant and Zi prime is random -- in some sense, you are just putting a constant inside an expectation. And now what happens is that you can switch the expectation with the sup. Maybe ask the question first. If you [INAUDIBLE] the sum between [INAUDIBLE]. Oh, sure. Yes, sorry. It's this one?
Yeah. Cool. Thanks. Yeah. So now we'll make it more symmetric. We'll switch the expectation with the sup. I'm claiming that if you switch them, you get an inequality -- sup over Z1 prime up to Zn prime. So why is this true? This is just a very generic inequality which says that you can switch sup and expectation and get an inequality. Generically, the claim is: suppose you have a function g that takes in two variables, and suppose you take the expectation over the randomness of the second variable and then take the sup over the first variable. Then you can bound this by first taking the sup and then taking the expectation. Because when we do the math, we are going from the right-hand side to the left-hand side. And why is this true? It's because you can have an intermediate step: you take the sup over u and the expectation over v, and you bound g(u, v) by the sup over u prime of g(u prime, v). This inequality is very simple -- it's just because the term is term-wise bounded by the sup. And once you take the sup, you see that the whole thing doesn't depend on u anymore, right? So maybe I should have another step. I'm claiming that this is just equal to this, because this term doesn't depend on u -- you already got rid of u, right? So the sup over u can just be dropped. And then the green term is equal to the term below, just because you rename the variable u to u prime -- that's nothing. So that's why it's an equality. So in general, it's probably useful to know this as a fact: you can switch the sup with the expectation and get an inequality. Sometimes even I don't remember which direction the inequality goes -- that's why you probably want to somewhat know how to prove it, so that in case you get confused about which direction it is, you can still recover it. OK. Cool. OK. So that's how this works.
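A tiny numeric check of the generic fact (the function g and the uniform distribution over v are arbitrary choices): sup over u of the mean over v is at most the mean over v of the sup over u, and the gap can be strict.

```python
# g(u, v) = 1 if u == v else 0, with u, v ranging over {0, 1}
# and v uniform; chosen so the inequality is strict.
U = [0, 1]
V = [0, 1]

def g(u, v):
    return 1.0 if u == v else 0.0

# sup_u E_v g(u, v): for each u the mean over v is 1/2
sup_of_mean = max(sum(g(u, v) for v in V) / len(V) for u in U)
# E_v sup_u g(u, v): for each v some u matches it, so the sup is 1
mean_of_sup = sum(max(g(u, v) for u in U) for v in V) / len(V)
print(sup_of_mean, mean_of_sup)   # 0.5 <= 1.0
```

The sup inside the expectation gets to pick a different maximizer for every v, which is exactly why the right-hand side can only be larger.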
And now, take the expectation over Z again -- we had conditioned on Z, but now let's take the expectation over Z. Then you see that this is very symmetric. What we get is: you take the expectation over Z1 up to Zn, and it's bounded by -- now you have two expectations here, one over the Zi's and the other over the Zi primes -- and then you have the sup. Let's put it into a single sum, by the way: f of Zi minus f of Zi prime. OK. So now it becomes a little more symmetric. And I'll do one more thing to make it even more symmetric. This is symmetric in the sense that it's actually a mean-0 random variable. But it's not just mean zero -- actually, the distribution itself is symmetric, in the following sense: f of Zi minus f of Zi prime has the same distribution as f of Zi prime minus f of Zi, because these two things are just renamings of each other, in some sense. So they have the same distribution. Or in other words, this has the same distribution as sigma i times (f of Zi minus f of Zi prime), for any sigma i that is binary. If sigma i is plus 1, it's the same thing; if it's minus 1, it just flips the order. So that means you can introduce this random variable sigma i for free and not change anything. So if you introduce this Rademacher random variable, and you take the expectation over the sigma i's, multiplying by sigma i times (f of Zi minus f of Zi prime), this is still an equality. Actually, for any choice of sigma i this is an equality -- technically, the first step is that this is an equality for every sigma i. And then you say that even if you take another expectation over the sigma i's, this is still true. And then you can switch the expectations however you want.
And I'm going to switch it, just because it's a little bit convenient for me to do that. OK. So now, what I'm going to do is break this into two sums. So I'm going to have expectations -- here you have all the randomness; this is just a simplification of notation. And I'm claiming that this is less than the sup of the first term plus the sup of the second term. And here, what we are doing is essentially exactly the same thing as the swap of the expectation and the sup, but here we only have two terms, so it's a swap of sum and sup. We are doing something like: the sup of two terms -- the sup over f of u(f) plus v(f), say -- is less than the sup over f of u(f) plus the sup over f of v(f). I guess you can prove this in almost the same way as we did with the expectations; you just need one step in the middle. I will leave this as an exercise for you. So now you have probably seen that we are getting closer and closer to the definition of the Rademacher complexity. The only thing is that we have two terms, and the Rademacher complexity has just one of them. So now we can swap the expectation with the sum. You get the sup over the term with the Zi's, plus the expectation of the sup of minus sigma i times f of Zi prime. And here the randomness is Z1 up to Zn and sigma 1 up to sigma n. So the first term is exactly the Rademacher complexity. And I'm going to claim that the second term is also exactly the Rademacher complexity. Because here my randomness is Z1 prime up to Zn prime and sigma 1 up to sigma n. But again, minus sigma i times f of Zi prime has the same distribution as sigma i times f of Zi, because minus sigma i has the same distribution as sigma i, and Zi prime has the same distribution as Zi. So the second term is equal to the first one. So basically, this is just equal to 2 times this -- 2 times the Rademacher complexity of F. Any questions?
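Here is a Monte Carlo sketch of the bound we just proved, for a toy family I am making up: Z uniform on [0, 1], and F the threshold functions f_t(z) = 1[z <= t] on a finite grid. The estimated left-hand side, the expected sup of empirical mean minus population mean, should sit below twice the estimated Rademacher complexity.

```python
import random

random.seed(0)
n, trials = 20, 2000
ts = [i / 20 for i in range(21)]        # grid of thresholds (assumed family)

def f(t, z):
    return 1.0 if z <= t else 0.0

lhs = rhs = 0.0
for _ in range(trials):
    Z = [random.random() for _ in range(n)]
    # left-hand side: sup_f (empirical mean - population mean); E f_t = t
    lhs += max(sum(f(t, z) for z in Z) / n - t for t in ts)
    # right-hand side: Rademacher correlation with fresh random signs
    sigma = [random.choice([-1, 1]) for _ in range(n)]
    rhs += max(sum(s * f(t, z) for s, z in zip(sigma, Z)) / n for t in ts)
lhs /= trials
rhs /= trials
print(lhs, 2 * rhs)     # symmetrization predicts lhs <= 2 * rhs
```

Both sides shrink as n grows, but on every run the symmetrization inequality holds with room to spare, which is all the theorem promises.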
So it kind of feels like we just did algebra for a bunch of lines? Is that [INAUDIBLE]? That's a great question. And that's exactly what I'm going to remark on. So the question was -- if I phrase it slightly differently -- what have we really done here? Did we do anything powerful? Because the left-hand side has a sup and the right-hand side still has a sup. Did we do something really useful, or did we just do a bunch of algebra, right? So I'm going to claim that we did do something useful. The reason is that the left-hand side -- the sup -- is what we care about: the difference between the empirical mean and the population mean. And on the right-hand side, roughly speaking, the most important thing is this sigma. So what have we achieved here? One, we have removed the Ef, right? We got rid of the Ef. It's probably not super clear at first sight why we should appreciate the fact that we got rid of the Ef, but I can say that this Ef is somewhat annoying because you don't have good control over it. So when you look at this, in some sense -- for example, this quantity doesn't depend on Ef: if you shift f a little bit, it wouldn't change. Actually, we are going to claim that the right-hand side is translation invariant. So in some sense, you removed the translation-dependent part. Maybe let me just claim that the right-hand side -- the Rademacher complexity -- is translation invariant. I'm going to prove this in a moment. Let me see whether I planned to do this in today's lecture. I think I didn't plan to do it in today's lecture, but this is the claim. So in some sense, you remove the Ef, right, which is useful in many cases. And second, we introduce more randomness: sigma 1 up to sigma n. So why is introducing this randomness useful?
It's probably still unclear right now. But eventually, what we really have is the expectation of this, right? And here the randomness is Z1 up to Zn and sigma 1 up to sigma n. So we will use the additional randomness, and this will allow us to drop the randomness from Z1 up to Zn -- this will be something we'll see, I guess, probably in the next lecture. So eventually, you don't have to take the expectation over Z1 up to Zn; you can make a claim with high probability. So on the right-hand side, you won't need to take the expectation over Z1 up to Zn -- the only randomness comes from the sigma i's. I guess you probably don't see exactly what I mean yet, but if eventually you only care about the randomness of sigma 1 up to sigma n, that seems to be a benefit, because this is much simpler randomness. Sigma 1 up to sigma n have a very simple distribution -- they are just Rademacher random variables -- so they are much less complicated than the distribution of Z1 up to Zn, which is something you don't know; you just assume there's a distribution P, but you don't really know any other properties about it. So I think that's the second benefit. But of course, the limitation is that we still have the sup, which is still a problem. But you probably shouldn't expect that you can remove the sup at this level -- when you have an abstract family of functions, you probably shouldn't expect you can remove the sup completely. It should be at the next level, where you have a concrete hypothesis class, that you remove the sup. Cool. Let me just drink something. So the next part is another useful property, or useful thing to know, about Rademacher complexity, which is that the Rademacher complexity can depend on the distribution P.
It still can depend on the distribution P, even though our goal is to use the new, simpler randomness, right? Why is this the case? This is just because, in this definition of Rademacher complexity, you do have to draw Z1 up to Zn from the distribution P, right? So here is an extreme example where you can see this: suppose P is a point mass. Let's say Z is always equal to Z0 almost surely. So however you draw, you always just draw a single point. In this case, you can actually have a good Rademacher complexity bound for any bounded family of functions. So suppose -- let's say F is the family of functions f such that f(Z0) is bounded by 1 in absolute value [AUDIO OUT], and this is the only constraint on the family F. So we just have a bounded family of functions; you don't even have any parametric form. Still, you can prove that the Rademacher complexity of this family is small. So look at the sup. What is this? Because f(Zi) is always the same, this is literally equal to 1 over n times f(Z0) times the sum of the sigma i's. And because f(Z0) is just a constant, it doesn't depend on what f is, because Z0 is-- wait, sorry. My bad. I'm wrong about that. Let's see. f(Z0) still depends on f, right? But f(Z0) is bounded between minus 1 and 1. So that means this is less than or equal to the expectation of -- if you just bound f(Z0) by 1 -- 1 over n times the absolute value of the sum of the sigma i's. And then use Cauchy-Schwarz -- I think this is called Cauchy-Schwarz -- the expectation of the absolute value of a random variable is smaller than the expectation of the square of the random variable, to the power 1/2. And then -- actually, we're going to see this kind of derivation several times -- you get, to the power 1/2, 1 over n squared times the expectation of the sum over i not equal to j of sigma i sigma j, plus the sum of sigma i squared.
I'm just expanding it -- you get the 1 over n squared. The expectation of sigma i sigma j is 0 for i not equal to j. So you get the sum over i from 1 to n of the expectation of sigma i squared, to the power 1/2. Each of these is 1; you take the sum, you get n. So you get n over n squared, to the power 1/2, which is 1 over square root of n. So in some sense, this is kind of interesting, right? For a very, very large family of functions, without even a parametric form, you can still have a good Rademacher complexity. And the reason is that the distribution is so simple. So in some sense, this is an indicator that the Rademacher complexity can capture something about the distribution. If a distribution is extremely simple, then the Rademacher complexity can capture that and tell you that it's very easy to generalize. So basically, any family F on a very simple distribution should be considered as very simple, even though this family F, in some sense -- you have basically no assumptions on f. There is no parametric form; it's a very large family of functions. But with respect to a simple distribution, it should be considered simple. And this is what Rademacher complexity can tell you. So that's saying that Rademacher complexity can take the distribution P into account. But how much it can take the distribution P into account -- that's a question mark. In many other analyses you don't have this property; you don't really use much about the distribution P in many of the concrete bounds for Rademacher complexity. But in principle, it can capture something about P. I have 15 minutes. I think there is time for me to do this next part. Let me see. Yes, I think I have time to do this. OK. So the next part -- if there are no questions-- [INAUDIBLE] What if you know that [INAUDIBLE]? So your question is whether, for example, when the features -- the coordinates of x -- have correlations, or maybe independence? Yeah. Independence is probably more like a simplistic thing, right?
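As an aside, the point-mass calculation above is easy to verify by simulation: under a point mass, the empirical Rademacher complexity of the bounded family reduces to E of the absolute value of (1/n) times the sum of the sigma i's, and the Cauchy-Schwarz step says this is at most 1 over square root of n. A quick Monte Carlo estimate (the sample sizes are chosen arbitrarily):

```python
import math
import random

random.seed(0)
results = {}
for n in [10, 100, 1000]:
    trials = 5000
    # E | (1/n) sum_i sigma_i |, estimated by averaging over random sign draws
    est = sum(abs(sum(random.choice([-1, 1]) for _ in range(n))) / n
              for _ in range(trials)) / trials
    results[n] = est
    print(n, round(est, 4), round(1 / math.sqrt(n), 4))  # est <= 1/sqrt(n)
```

The estimates track a constant times 1 over square root of n and always sit below the Cauchy-Schwarz bound, matching the derivation.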
So can you get a better bound from Rademacher complexity? To answer this question, we need to zoom in to concrete settings. For linear models, I guess you will see that you get a bound -- at least if you compare the extreme cases: in one case all the coordinates are correlated, and in the other case -- actually, it's unclear. Because if all the coordinates are correlated, you probably should have a better bound: in the very extreme case where all the coordinates are the same, you effectively have a one-dimensional problem, so you should have a better bound. So it does depend on the particular situation, I think. So yeah, it's interesting. It's not clear that independence really means simpler. Independence could mean that it's more complicated -- just because, with an independent input distribution, you have a diverse set of data, it might be harder to generalize in some cases. For example, here in this one-point-mass case, you have a very narrow family of data, which means you can generalize easily, because you can just memorize the value at Z0. So independence might make it harder. So in the next 10 to 15 minutes, let me try to define this so-called empirical Rademacher complexity. The goal here is to remove the expectation in front of the sup. Currently, the average version has two expectations: one is over the randomness of Z1 up to Zn, and there's another expectation over the randomness that we created, sigma 1 up to sigma n. And you have this sup. And we are going to claim that this is basically similar to the same thing without the outer expectation, with high probability. And the probability is over the randomness of Z1 up to Zn. So you still have to draw Z1 up to Zn, but for most choices of Z1 up to Zn, these two things are similar. So this is a random variable that depends on Z1 up to Zn.
While this other one is just a constant -- well, I probably shouldn't call it a constant; it's a deterministic number, right? The first one is the random variable that depends on Z1 up to Zn, and I'm claiming that it concentrates around the deterministic one with high probability. And if we can do this, then that's what I alluded to before. So now, this is defined to be the empirical Rademacher complexity -- let me have a notation for that. I think in the notes there is a formal definition, but here, just for the sake of time, let's define this to be R sub S of F, where S is the set Z1 up to Zn. And this is called the empirical Rademacher complexity. And you can see that the original Rademacher complexity -- the average Rademacher complexity -- is the expectation of the empirical Rademacher complexity, where you take the expectation over the set S, right? Just because these two things only differ by a single expectation. And so, if you can do this, then you have a high-probability bound: you don't have to average over Z1 up to Zn. And also, you can do the same thing for the left-hand side, for the uniform convergence quantity. So recall that before, we only proved that the expectation over Z1 up to Zn of the sup of this minus Ef is less than the Rademacher complexity. We will also show that this is approximately equal to its expectation with high probability -- I should say, the latter one is a random variable that depends on Z1 up to Zn, and it is approximately equal to its expectation with high probability. So if you have both of those, then you basically remove the expectations from your equation and you get a high-probability bound. Does that make sense? Any questions? So basically, eventually we're going to prove this. Let me state the formal theorem.
We can prove that, assuming all the f's are bounded, then with probability at least 1 minus delta -- over the randomness of Z1 up to Zn; here I don't have an expectation -- the sup of this is less than 2 times the empirical Rademacher complexity, plus an additional term, which is the square root of ln of 2 over delta, over 2n. So you pay an additional small term, which is on the order of 1 over square root of n times something logarithmic in the probability delta. But basically, by paying this, you get a high-probability bound instead of an average version. The proof here is actually relatively straightforward -- it's basically just applying [INAUDIBLE] inequality -- but maybe let me do that in the next lecture. I think it takes probably 10 minutes. Maybe let me start with a remark. So I guess, typically, this square root of ln of 2 over delta, over n, is much smaller than the Rademacher complexity -- either the empirical one or the population one. And the reason is that those two things will be something like the square root of something over n, and that something depends on the complexity of F; it's something that is not negligible. But here, you have the square root of a logarithmic term over n. That's pretty much the smallest thing you can think of, right? A logarithm is kind of like a constant. Your complexity of F wouldn't be logarithmic in anything -- it should be something bigger than that. So that's why this additional term is typically negligible, and why you can basically think of it as: you didn't lose anything by doing the empirical version. And it's interesting that what you lose here, at least at this level, doesn't depend on the complexity of F. So the Rademacher complexity term depends on the complexity of F, but what you lose between the expected empirical and population versions doesn't depend on the complexity of F.
And maybe -- I think this is a perfect time for the second remark, remark two. So, Rn(F) and RS(F) are both translation invariant. What does that mean? It means: suppose you have F prime, which is a translation of F -- that is, F prime is the family of functions f prime, with f prime of Z equal to f of Z plus a universal constant Z0. So for every function little f in capital F, you have a corresponding function in capital F prime, which is just the translation: you just add Z0 to it. Then they have the same empirical Rademacher complexity. In some sense, we have seen this derivation somewhere in the involved derivations before, but let me just make it more explicit. The Rademacher complexity of F prime: you look at the expectation over sigma, and you take the sup of the sum of sigma i times f prime of Zi, and you plug in the definition -- f of Zi plus Z0. Now, you can pull out the part with Z0 -- I think we have seen the same technique before -- because Z0 is not a function of little f, so you can pull it out of the sup. So you get, plus, 1 over n times the sum of sigma i times Z0. And then you can swap the expectation with the sum, so you get the expectation over sigma of the sup, plus the expectation of 1 over n times the sum of sigma i times Z0. And this second part becomes 0, because sigma i is a Rademacher random variable with mean 0. So this is RS of F. So in some sense, this is a property of the Rademacher complexity which is somewhat interesting: you don't care about translation, but you do care about scale. If you scale everything by 1/2 or by 2, you would change the Rademacher complexity, but it wouldn't change when you shift things. So it's about the relative differences between the functions in F, not about the absolute size of F. For example, if the function f always takes values between 1 [INAUDIBLE] and [INAUDIBLE], that's not very different from taking values between 0 and 1. OK. I think this is a natural stopping point for today. Any questions?
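A quick numeric check of remark two (the four random value vectors and the shift constant are invented for illustration): shifting every function in the family by the same constant leaves the empirical Rademacher complexity unchanged, up to Monte Carlo noise, when the same sign draws are reused for both estimates.

```python
import random

random.seed(0)
n, c, trials = 5, 3.0, 50000
# Each inner list holds (f(z_1), ..., f(z_n)) for one function f (made up).
F = [[random.gauss(0, 1) for _ in range(n)] for _ in range(4)]
F_shift = [[v + c for v in vec] for vec in F]

# Reuse the same sign vectors for both estimates (common random numbers).
sigmas = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(trials)]

def emp_rademacher(vecs):
    """Estimate E_sigma max over vecs of (1/n) sum_i sigma_i v_i."""
    return sum(max(sum(s * v for s, v in zip(sig, vec)) / n for vec in vecs)
               for sig in sigmas) / trials

r_orig = emp_rademacher(F)
r_shift = emp_rademacher(F_shift)
print(r_orig, r_shift)   # equal up to Monte Carlo noise
```

The shift contributes c times the average of the sigma i's inside the sup, and that extra piece has mean zero, which is exactly the cancellation in the derivation above. Rescaling by 2 instead of shifting would double both estimates.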
[INAUDIBLE] First of all, it's not always the case that-- right, that's a good question. So I claimed this vaguely without any justification -- why the Rademacher complexity should be like 1 over square root of n. I should say it's not actually exactly true. For most of the cases -- actually, for all the cases we are going to see in the lectures -- it's 1 over square root of n. But in some cases, the dependency on n could be a little bit different. So yeah, sorry, I was not quite clear. And I'm not sure whether that question is still in the homework. There used to be a homework question where you have a different dependency on n; I think I removed that question for this year, just because it's not that relevant to the overall goal. But there could be other dependencies. The reason it's mostly 1 over square root of n is that even if you look at a single example -- you don't take the sup; you do the simple thing: you fix your function, then you draw your data, and you look at how different the empirical mean is from the population mean -- that's always 1 over square root of n, without any doubt. So that's why you can never do better than 1 over square root of n. But you can be worse than 1 over square root of n. I'm not sure whether that makes sense, right? Even if you look at the concentration at a single f -- you fix the function, you draw the random variables Z1 up to Zn -- you still have a fluctuation on the order of 1 over square root of n. So you cannot beat that, but it could be worse than that. [INAUDIBLE] From the definition of the Rademacher-- I think you can still see that to some extent. Because if you look at the sum of sigma i times f of Zi -- maybe I'll just write it here -- this is still a sum of n terms.
And so, even if you don't take the sup, this term would be something on the order of 1 over square root of n, just because of concentration: you have a sum of n terms, each of them on the order of 1, so the sum of the n terms is on the order of square root of n. And then you divide by n, and you get 1 over square root of n. We can talk more offline, maybe. Yeah. Sounds good. I guess I will see you on Monday-- on Wednesday.
Stanford CS229M: Machine Learning Theory, Fall 2021. Lecture 3: Finite hypothesis class; discretizing infinite hypothesis space.

OK, now, let's talk about math. So last time, where we ended was, we were talking about uniform convergence. We said that our goal for the next few lectures will be the so-called uniform convergence, which means that you want to prove that, with high probability, if you take the sup, or maximum -- a sup really just means a maximum for this course -- if you take the sup over the hypothesis class and you look at the difference between the empirical risk and the population risk, you want to show that this is small with high probability. So this is the general idea. And we said that this is different from showing that, for every fixed h, with high probability, L hat of h minus L of h is small. These two are of different natures, because the order of the quantifiers, in some sense, is different. One requires that, with high probability, the event holds that the population risk is close to the empirical risk for the entire class simultaneously. The other one is saying that you only look at one single h, you look at the probability that this population risk is different from the empirical risk, and you want to show that this event happens with high probability. So in some sense, the difference is kind of like a union bound, which I'm going to talk about more when we get to proving this kind of statement. So in this lecture, we are going to talk about two cases for H. Certainly, this statement depends on H -- you cannot hope to prove things like this for every possible capital H. It does depend on the family of hypotheses you think about, and the bound actually depends on the family of hypotheses you are talking about. So the first part is going to be about the finite hypothesis class, where H is assumed to be finite.
And the next part is going to be the infinite case, the infinite hypothesis class. And for an infinite hypothesis class, there are many different ways to achieve this kind of bound. And today, we're going to talk about a relatively brute force way to do it. In some sense, you do a reduction to the finite hypothesis class. Essentially, no matter what you do, you are doing a reduction to the finite hypothesis class. But how you reduce to the finite case does matter. So today, we're going to talk about the brute force reduction, which does show some kind of intuition. OK. And so that's a brief overview of what we're going to do in this lecture. So I guess let me just start-- let's talk about the finite hypothesis class. And here is the theorem we're going to prove. So there are some conditions. The condition is, as we did last time, we assume the loss is between 0 and 1 for every x, y and every hypothesis. And this is true for the binary loss, the 0-1 loss. It's not true for every possible loss, but if you have other losses, you have to do a small fix to make these proofs still work. But this is not very essential. It's mostly for convenience. And what we're going to prove is the following statement. For every delta between 0 and one half-- this is not very important either, so delta is a small number-- with probability at least 1 minus delta, we have that for every h, |L hat (h) minus L(h)| is bounded by sqrt((ln |H| + ln(2/delta)) / (2n)). And recall that the reason why we care about this uniform convergence was that it's useful for us to bound the excess risk, right? So we have shown that if you have this kind of uniform convergence, then you can prove that your excess risk is bounded. So using what we have discussed last time, as a corollary, we also get a bound on the excess risk L(h hat) minus L(h star), where h hat is the ERM solution.
L(h hat) minus L(h star) is less than-- you pay a factor of 2 in that derivation, so you multiply by the factor 2. So you get something like 2 times sqrt((ln |H| + ln(2/delta)) / (2n)). OK, cool. So this is the theorem we're going to prove. Before we prove the theorem, you can see that the bound-- the right hand side-- does depend on the size of the hypothesis class, right? If you have a bigger hypothesis class, then your bound would be worse. So it's harder to prove this uniform convergence when you have a larger hypothesis class. And if you try to interpret this bound-- so here, this is a bound on the excess risk-- we can see that you need n to be bigger than the log of the size of H so that the right hand side of the bound becomes meaningful, right? So you want the excess risk to be something smaller than 1, at least, at a minimum. So you need n to be at least larger than the log of the size of the hypothesis class. That's why you need enough samples, right, to make these bounds meaningful. And, of course, as n goes to infinity, you have a better and better bound. I'm going to have more discussion after we prove the theorem. OK? So now let's try to prove the theorem. So I guess the outline of the proof is that first, you do an individual h. You prove this for individual hypotheses-- you prove the simple version, basically like we discussed last time. And second, we take a union bound over all h. OK. So let's do the first step. So recall that last time, we have done this already for a fixed hypothesis. So here, I'm just doing it a little more formally. So last time, we actually showed this, right? We used the Hoeffding inequality to get that |L hat (theta) minus L(theta)| is something like on the order of 1 over square root of n. That's what we did somewhat informally last time with the Hoeffding inequality. And today, I'm going to have a little more careful derivation to get exactly all the dependencies up to constants.
And by the way, theta and h are the same. h is just what you use when you talk about a finite hypothesis class, right? You don't necessarily have a parameter. You may just list all the hypotheses. That's why it's called h. And when you parameterize it, you have the parameter theta. But for this purpose, they are not different at all. So let's apply Hoeffding's inequality-- this is from the last lecture-- where a_i is 0 and b_i is 1. So the bounds are 0 and 1, right? So we get that for every h in H-- suppose this h is fixed and then you draw your sample-- you look at the probability that |L hat (h) minus L(h)| is less than epsilon. And here, what's random? The randomness is the data set. The randomness comes from the data set, which goes into L hat. And if you use the Hoeffding inequality, you get that this is larger than 1 minus 2 times exp(minus 2 n squared epsilon squared over the sum of (b_i minus a_i) squared). And because b_i is 1 and a_i is 0, the sum of (b_i minus a_i) squared is n. So you get 1 minus 2 exp(minus 2 n epsilon squared). And right, this is because b_i is 1, and a_i is 0. And so in other words, if you look at the other side of the bound, you look at what's the chance that they are different. The chance that they are different is less than 2 times exp(minus 2 n epsilon squared). Actually in many cases, the Hoeffding inequality is stated this way, instead of the way that I showed before. They are exactly the same. It's just the complement of each other: if you have a lower bound for some event, then you have the upper bound for the complement of the event. And now, for every h, you have this, right? So basically, for every h, you have some kind of failure event, which is this event. And this event happens with a small probability. And now, let's recall that you have the so-called union bound.
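A quick numerical sanity check of this Hoeffding step can help make it concrete. The sketch below (not from the lecture; the parameter values are arbitrary illustrative choices) Monte Carlo-estimates the two-sided deviation probability for a single fixed hypothesis whose 0-1 losses are Bernoulli(p), and compares it against the bound 2 exp(-2 n epsilon squared):

```python
import math
import random

def hoeffding_bound(n, eps):
    # Two-sided Hoeffding bound for n i.i.d. losses in [0, 1]:
    # P(|L_hat - L| >= eps) <= 2 * exp(-2 * n * eps^2),
    # since the sum of (b_i - a_i)^2 is n when a_i = 0 and b_i = 1.
    return 2 * math.exp(-2 * n * eps ** 2)

def empirical_failure_rate(n, eps, p=0.3, trials=2000, seed=0):
    # Monte Carlo estimate of P(|L_hat - L| >= eps) when each loss
    # is Bernoulli(p), so the population risk L is exactly p.
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        l_hat = sum(rng.random() < p for _ in range(n)) / n
        fails += abs(l_hat - p) >= eps
    return fails / trials
```

For n = 200 and eps = 0.1, the Hoeffding bound is about 0.037, and the simulated failure rate comes out well below it, consistent with the bound being valid but not tight.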
And the union bound is saying that if you look at the union of a bunch of events-- maybe let's say k events-- then its probability is smaller than the sum of the probabilities of each event. And here, suppose you say E_h corresponds to the event that L hat (h) is different from L(h) by epsilon. Then the probability of the union of the E_h's-- which is basically asking, what's the probability of the union of these failure events? Basically, it means that there exists h such that this event happens, right? Such that |L hat (h) minus L(h)| is larger than epsilon. So this is the union of all of these events. And it's less than the sum of the probabilities of each of the events. OK. And now, you plug in what you have prepared. Maybe let's call that equation 1. So if you plug in 1, then you get-- this is a sum over all the h's-- 2 times exp(minus 2 n epsilon squared) for each term. So each of these events is small, and you multiply by the total number of possible events, which is the size of H. So basically, we have 2 times |H| times exp(minus 2 n epsilon squared). OK? And you can see that this is basically what we wanted to have. Because now, we have the probability that there exists h such that they are different, and the complement of this-- that for every h, the flipped event is true-- is just 1 minus that probability. By the way, I'm not distinguishing strict and non-strict inequalities in most of this course-- so technically, I probably should write this as less than epsilon, right, instead of less than or equal to epsilon. But for this course, I'm not super careful about this, because it doesn't really matter that much. And in many cases, actually, the probability that this is exactly equal to epsilon is 0, so technically it's even correct. But anyway, this is not super important for this course. Because of this, you can see that this is what we care about, right? For every h, L hat (h) is close to L(h).
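The union bound itself is easy to see numerically. Here's a small sketch (my own construction, not from the lecture; all numbers are illustrative) with k independent "hypotheses" whose losses are Bernoulli with random true risks; it estimates the probability that any of them deviates, and compares it with the sum of the individual deviation probabilities:

```python
import random

def union_bound_demo(k=10, n=100, eps=0.15, trials=500, seed=1):
    # Estimate P(exists h: |L_hat(h) - L(h)| >= eps) for k "hypotheses"
    # with Bernoulli(p_h) losses, and compare it with the union bound:
    # the sum over h of P(|L_hat(h) - L(h)| >= eps).
    rng = random.Random(seed)
    risks = [rng.uniform(0.2, 0.8) for _ in range(k)]  # true risks L(h)
    any_fail = 0
    each_fail = [0] * k
    for _ in range(trials):
        failed = False
        for j, p in enumerate(risks):
            l_hat = sum(rng.random() < p for _ in range(n)) / n
            if abs(l_hat - p) >= eps:
                each_fail[j] += 1
                failed = True
        any_fail += failed
    lhs = any_fail / trials                   # P(union of failure events)
    rhs = sum(f / trials for f in each_fail)  # sum of individual probabilities
    return lhs, rhs
```

By construction the left side can never exceed the right side on any run; how loose the gap is depends on how much the failure events overlap, which is exactly the point made later about nearby thetas.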
And we are already kind of getting there, almost. So the only thing we need is to know what this quantity is. We need to upper bound this so that we can lower bound this probability. OK? So now, let's choose epsilon so that-- basically, you want this probability to be bigger than 1 minus delta, so you want the failure probability to be less than delta, right? So we just need to choose epsilon so that this probability becomes delta. So choose epsilon such that 2 |H| times exp(minus 2 n epsilon squared) equals delta. And this involves solving the equation, which is not too hard. So if you solve it, you get epsilon equal to, I guess, exactly what I had before: epsilon = sqrt((ln |H| + ln(2/delta)) / (2n)). So basically, if you take epsilon to be this, then you know that the probability that there exists h with |L hat (h) minus L(h)| bigger than epsilon is less than delta, right? And then if you flip the event, you get the desired theorem. Any questions so far? OK, so let me have a few remarks to somewhat interpret what we have done and compare it with what we did in the first lecture. So if you compare with the asymptotic results, you're going to see this, right? For the asymptotic results, what you got is that L(h hat) minus L(h star)-- the excess risk-- is bounded by something like c over n plus little-o of 1 over n. And recall that this c can depend on the dimension of the problem, and it can hide any other dependencies on the problem. And what we have now is that the excess risk is smaller than-- so here, you don't hide anything. You hide some constant, of course. You have something like sqrt(ln |H| / n), and you also have something like O(sqrt(ln(1/delta) / n)).
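The algebra of choosing epsilon can be checked mechanically. A small sketch (function names are mine): plugging the solved epsilon back into the union-bounded failure probability should return delta exactly, up to floating point error.

```python
import math

def failure_prob(H_size, n, eps):
    # Union-bounded failure probability: 2 * |H| * exp(-2 * n * eps^2).
    return 2 * H_size * math.exp(-2 * n * eps ** 2)

def solve_eps(H_size, n, delta):
    # Solve 2 * |H| * exp(-2 * n * eps^2) = delta for eps:
    # eps = sqrt((ln|H| + ln(2/delta)) / (2n)).
    return math.sqrt((math.log(H_size) + math.log(2 / delta)) / (2 * n))
```

For example, with |H| = 1000, n = 500, delta = 0.05, `failure_prob(1000, 500, solve_eps(1000, 500, 0.05))` recovers 0.05, and increasing n shrinks the solved epsilon.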
So this second term is supposed to be relatively small, because you can take the delta to be-- this term is logarithmic, right? You can take delta to be something like n to the minus 10, right? So you still get square root log n over square root n. So let's say we take delta to be n to the minus 10, so that this term will be square root log n over square root of n, which is almost negligible compared to the first term. So basically, let's say we ignore this for the comparison. Then you can compare this and this. So you can see that-- the first thing is that we have a worse dependency on n, right? Before, the dependency, at least in terms of the leading term, was 1 over n. And now, it's 1 over square root of n. So it goes to 0 slower. So this could be improved in certain cases, which we probably won't do here. I guess in one of the homework questions, you're going to be asked to improve this to some extent. In some cases, you can improve this to 1 over n, depending on various situations. But generally, you get a relatively worse dependency on n compared to the asymptotics. One of the reasons why this is happening is that we didn't assume twice differentiability of the loss function. So here, the only assumption we have on the loss function is that it's between 0 and 1. So it even works for the 0-1 loss-- the classification 0-1 loss. But before, we did assume that the loss has to be continuous and differentiable, and I think we also assumed it's twice differentiable. So that does play a fundamental role here. So when we don't have twice differentiability and we don't have other assumptions, it's actually kind of impossible to get 1 over n rates in many cases. But what I'm talking about here is all about the downside of our new bound. The pros, we actually already kind of mentioned. The main pro is that now, we don't have any hidden dependencies.
So before, recall that last time, when we were motivated to have non-asymptotic bounds, we were saying that this thing could hide a lot of things. This could hide, for example, something like dimension to the 50-- that's my extreme example-- over n squared. So p to the 50 over n squared will be counted as little-o of 1 over n. So that doesn't make a lot of sense, just because if the dimension is too high, then this requires n to be very big to be small. So this was the issue that we mentioned last time about asymptotics. And now, we fixed that issue. And that's the main benefit we gain. So we don't hide any of the dependencies. And also, we can now see how this depends on the complexity, in some sense, of the hypothesis class. The ln |H| can be thought of as a complexity of the hypothesis class. Probably, if you have been through CS 229, we have talked about how you can overfit if you have too complex a function class but you don't have enough data. And this is, in some sense, a mathematical characterization of that. So if your function class is so complex that the log of |H| is too big, and you don't have enough data compared to log of |H|, then you may have a worse bound. And on the other hand, suppose your log of |H| is small and your n is bigger than log of |H|, then you have a better bound which could be meaningful. Any questions so far? How does the [INAUDIBLE]? We are doing-- yes-- or no. This is about the differentiability of the loss function. So the loss function is the function of-- depending on how you think about it, but by the differentiability, I really mean this function that takes in y hat and y and outputs a scalar. So it takes in the prediction and the real label, and outputs a scalar. The question is whether this function is differentiable with respect to y and y hat. We didn't assume that this function is differentiable here.
But implicitly, you were assuming that this loss function is differentiable with respect to y and y hat in the previous asymptotic analysis, because there, actually, we assumed the whole loss function, if you compose it with the model, has to be differentiable. [INAUDIBLE] So-- I didn't hear very-- [INAUDIBLE] practical implementation of floating point numbers-- did you use the same bound? For practical implementations where you have floating points? Yeah. [INAUDIBLE] So I guess my interpretation of the question-- maybe let me rephrase the question a little bit, also for the people on the Zoom meeting. So I think the proposal is that, for example, if you really have a practical model, and you have p parameters-- when you really implement this in a computer, it's not continuous. So you can think of each parameter as described by maybe 32 bits, and then you can count the total number of different models there are, and apply this bound. So yes, that's a good idea, and what will that give you? That will give you-- suppose you have p parameters, and let's say you have 32 bits each. So then what does that mean? That means that the total size of H would be: for every one of the p parameters, you have 2 to the 32 choices, and you raise that to the power p. And so that means the log of |H| is equal to something like 32 p-- like O(p). It's a constant times p. So basically, you get a bound that depends on the number of parameters. And this is reasonable in some cases, and not very reasonable in some other cases. But definitely, it's a bound that makes sense. So in some of the later parts of the lecture, we are going to see how to get a bound that doesn't depend on the number of parameters. But if you are fine with getting a bound that depends on the number of parameters, then this is indeed a good bound. And this is actually a natural question that leads me to the second part of today's lecture.
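The counting in that proposal is one line of arithmetic. A sketch (32 bits is the illustrative choice from the discussion, and the function name is mine):

```python
import math

def log_H_finite_precision(p, bits=32):
    # If each of the p parameters takes one of 2**bits values, then
    # |H| = (2**bits)**p, so ln|H| = p * bits * ln(2), i.e. O(p).
    return p * bits * math.log(2)
```

So the finite-precision hypothesis class gives ln|H| proportional to p: doubling the number of parameters doubles ln|H|, and halving the precision (16 bits instead of 32) halves it.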
So this proposal has a small con, a small kind of problem, which is that you basically have to say, I have to resort to the practical implementation. In practice, I cannot really implement real numbers; all the real numbers, I have to discretize in some way. And sometimes, you put an additional restriction on yourself, saying, if I can only use floating points, then what bound can I have? So what I'm going to discuss next is that you don't even need this. You can say that, even for all the possible continuous models-- which have p parameters, and each parameter is really a real number-- for example, suppose you have an almighty computer which can have infinite precision-- still, your bound would look something like this. You still have an O(p) bound. So suppose we have that infinite hypothesis class proof. Then, you don't need this practical way of proving it. You can have a more general, stronger way to prove it, and that's what we're looking for. Cool. So maybe let's start to do that. So let's talk about the infinite hypothesis class. And as I suggested a little bit before, we are going to have a bound that looks like the square root of p over n, where p is the number of parameters. So this is something we're going to have. Cool. And so today, we're going to do this so-called brute force discretization. This is, at least, how I name this technique, I guess. Because this technique is just brute force, I guess there's no real name for it. And what you can do is the following. So maybe-- yeah, let me state the theorem that I'm going to prove first, and then I can tell you what's the intuition and how to prove it. So this is the theorem. OK, I guess I'm still setting up. Suppose H is parameterized by theta in p dimensions. So mathematically, you write H as a family of h sub theta.
Each h sub theta is a parameterized model, where theta is in some set Theta, which is a subset of R^p. So capital Theta is the set of parameters you are going to choose from. And in some sense, this is for convenience. But I guess you probably wouldn't see yet why this is only for convenience; it doesn't really matter. So suppose you only select models from this set, where the norm of the parameter is bounded by B. Our dependency on B will be only logarithmic. So in some sense, this is not really a real restriction. You can choose B to be pretty big, just because your dependency on B is very relaxed-- it's logarithmic in B. So this is our setup. And also let's recall that we're sometimes going to use this notation-- we use all of these notations interchangeably. So this is really just the loss of the model theta on the data point (x, y). So it really just compares h theta of x with y, and you get the loss. These two are just the same thing; we are abusing the notation a little bit. And also recall that we have L(theta) and L hat (theta), all as we defined before. And so here's the theorem. So we still have to assume that the loss is between 0 and 1-- this is probably assumed in most of this course-- for every x, y, and theta. And here is an additional assumption: you assume that this loss function is kappa-Lipschitz in theta, for every x and y. So what does this really mean? This really means that you are assuming that |l(x, y, theta) minus l(x, y, theta prime)|-- if you change your model to theta prime, then your loss would be different by at most a constant kappa times the norm of theta minus theta prime [INAUDIBLE]. So again, our dependency on kappa will also be logarithmic. So in some sense, this is also not assuming much, because if your loss is somewhat continuous, then it's going to be Lipschitz to some extent.
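To get a feel for the Lipschitz assumption, here's a tiny sketch of my own (not from the lecture): it crudely estimates the Lipschitz constant of a one-dimensional map theta -> l(theta) by taking the maximum slope over sampled pairs. The example loss min(1, |theta - x|) is bounded in [0, 1] and 1-Lipschitz in theta, so its estimated constant should never exceed 1.

```python
def lipschitz_estimate(loss, thetas):
    # Crude lower estimate of the Lipschitz constant of theta -> loss(theta):
    # the maximum of |loss(a) - loss(b)| / |a - b| over all sampled pairs.
    # Assumes the points in `thetas` are distinct.
    best = 0.0
    for i, a in enumerate(thetas):
        for b in thetas[i + 1:]:
            best = max(best, abs(loss(a) - loss(b)) / abs(a - b))
    return best

def clipped_abs_loss(theta, x=0.3):
    # Example loss: bounded in [0, 1] and 1-Lipschitz in theta.
    return min(1.0, abs(theta - x))
```

This only lower-bounds the true constant from finitely many pairs, but it's enough to see that boundedness and Lipschitzness are separate assumptions: the clipping keeps the loss in [0, 1], while the slope controls kappa.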
Probably, the Lipschitz constant is not very good, but it would be something reasonable. And if you take the logarithm of it, the bound is not very sensitive to the Lipschitz constant. And then with this, you get, with probability at least 1 minus, I guess, O(e to the minus p)-- so actually, you have an even lower failure probability; the failure probability is e to the minus p-- that for every theta, the uniform convergence quantity |L hat (theta) minus L(theta)| is less than some big O of sqrt((p / n) times max(1, ln(kappa B n))). So eventually, the dependencies on kappa and B are logarithmic. That's what I promised. And the main thing is really p over n, so you get the dependency on the number of parameters and on n. And you still have the square root here, so this is still worse than the asymptotic bound if you compare with the leading term of the asymptotic bound. But as we said, you don't have the second order term of the asymptotic bound. So how do we prove this? So actually, the proof is very similar to what was suggested in the question. You are doing this quantization, and then you deal with the discretization error separately. So what you do is the following. Let me start with a sketch. So the outline of the sketch is the following. You define E theta to be the failure event that |L hat (theta) minus L(theta)| is larger than epsilon. And epsilon is going to be something TBD. But epsilon would be very similar to this thing, because you care about how different these two are. But anyway, epsilon is just some number; this is kind of like a placeholder. So you care about this kind of event. And we know that this will be a small probability event, as we have shown for the finite case. So this E theta is a small probability event.
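Just to see the shape of this bound numerically, here's a sketch; the constant is hidden by the big-O, and the exact placement of the max/log factor follows my reading of the spoken statement, so treat it as illustrative only.

```python
import math

def infinite_class_bound(p, n, kappa, B):
    # Shape of the theorem's bound, constants suppressed:
    # sup_theta |L_hat(theta) - L(theta)| <~ sqrt((p/n) * max(1, ln(kappa*B*n)))
    return math.sqrt((p / n) * max(1.0, math.log(kappa * B * n)))
```

The qualitative behavior matches the discussion: multiplying kappa or B by a large factor barely moves the bound (it enters through the log), while doubling p scales the bound by sqrt(2), and growing n drives it to zero up to the log factor.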
And before, recall what we did: we said that the probability of the union of these E theta's is less than the sum of the probabilities of the E theta's. But now, because you have an infinite number of thetas, this sum is infinite: each one is a small probability failure event, but each of these probabilities is some small positive number, and you take the sum over infinitely many of them, so you get infinity. That's why it doesn't work-- you cannot use exactly the same thing as before. But the reason why this can be fixed is because this union bound is very pessimistic. So if you think about the union bound-- I guess it depends on how you learned union bounds in previous lectures, but what I learned about the union bound is the following. You have the full probability space, and each event takes some part of the space. Maybe this is E1 and this is maybe E2. And the union bound is tight when all of these events-- I call them failure events-- are disjoint. So suppose this is the case. Then the probability of the union of these events will be the sum of the probabilities of each of the events. But here, it's not clear whether these events are disjoint. And actually, they may have a lot of overlap. So you have one theta-- so E theta. And if you change your theta to a nearby theta, you probably have something like this, which is E theta prime. And they have all of this overlap. And then your union bound starts to be very loose. So that also kind of motivates our way to fix it. The way that we fix it is the following. We don't take the union bound over all possible events. We select a subset of events, and take a union bound over them. And then we say the other events will be close to some of this subset of prototypical events.
So basically, the rough idea is that you select some prototypical events. Maybe I should just say typical events, or you just take some exemplar events-- I forgot how to spell that-- some exemplar events. And this subset of events is a smaller set than what you finally care about. And then, you use the union bound on the subset. And then you say that the other events are similar to the subset-- to the exemplars. So then, you cover all the events. That's the rough idea. So let's see how we exactly do this. Any questions so far? So to exactly do this, we need to introduce something called a discretized epsilon cover. This is actually also a useful tool for other cases as well. So let me first define this epsilon cover; it's kind of like a language to describe what I called prototypical events, or prototypical parameters or models. So an epsilon net-- sometimes these are called an epsilon net, sometimes an epsilon cover-- of a set S. And here, S corresponds to the family of all models you care about, and you want a subset of those models. And when you really define it, you have to specify a metric, rho. The cover is a set C, which is also a subset of S-- technically, we don't have to require C to be a subset of S, but in almost all cases it is-- such that for every x in S, there exists kind of a neighbor in C which is close to x. So if you draw this, you have a set of models-- of parameters-- called S. And the epsilon cover is a subset of S. So you select some points-- let's call these points-- these are all in C. And then you say that the set C needs to satisfy the following to be an epsilon cover.
So what it has to satisfy is that for every point x you pick in S, there exists kind of a neighbor in C. Let's call this x prime. Anyway, you see what I mean-- the purple cross is just indicating a point. So you have a point x here, and you can always find some other point x prime in C such that x prime is close to x. So that's basically saying that all of these points in C are prototypical points, because every point in S can find a neighbor in C. Does that make sense? And equivalently, you can also write this in the following way-- this is in some sense explaining why this is called an epsilon cover. So equivalently, you can write this as: S is covered by the union of the balls around all the points of C. Let me write it down and explain. So first of all, this thing is the ball centered at x with radius epsilon in the metric rho. So basically, this is saying the following-- this is the equivalent definition of an epsilon cover. You look at all the balls around all of these purple points, with radius epsilon. So in some sense, each such point covers its entire ball, because for every point in the ball, you can use the center as the neighbor. So basically, every point of C covers some part of the space. And the requirement is that if you look at all the balls around all the centers, they cover the entire S. That means that every point in S is covered by some ball, and that means every point in S has a neighbor in C. Any questions? C might not be finite by definition-- but in some sense, we will insist that C-- we will need to find a very small cover C, which is finite.
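The definition can be turned directly into a checker. A sketch (the function name is mine): it verifies, for a finite sample of points from S, that every point has a neighbor in C within epsilon in the l2 metric.

```python
import math

def is_eps_cover(C, points_of_S, eps):
    # Definition of an epsilon cover: every x in S has some x' in C
    # with rho(x, x') <= eps. Here rho is the l2 norm, and in code we
    # can only check a finite sample of points drawn from S.
    def l2(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return all(min(l2(x, c) for c in C) <= eps for x in points_of_S)
```

For instance, in one dimension the three points {-1, 0, 1} pass this check as a 0.5-cover for points sampled from [-1, 1], but fail it as a 0.2-cover, since a point like 0.75 is 0.25 away from its nearest center.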
And also, hopefully, we want the size of C to be small. By definition, there's nothing about whether it's finite or not, but we will construct an epsilon cover that is finite. So far, this is only a definition saying that C is an epsilon cover of S, and we will try to make C small. And this is actually exactly what we're going to do next. So how do we construct a set C that is finite and also covers the entire set? So what is S? For us, S is the set of all parameters-- the set of parameters theta with l2 norm less than B. You construct a subset of parameters that can cover all the parameters. So here is the lemma that says that you can do this: you can have a finite C, and actually, you can have a reasonable bound on how many points in C there are. So let's define-- I guess, for this lemma, I call this set Theta, defined as above. Then for every epsilon between 0 and B-- for every radius-- there exists an epsilon cover of this Theta in the l2 norm with at most (3B / epsilon) to the power p elements. So this is a cover, and the size of this cover is bounded by (3B / epsilon)^p. So actually, we're going to prove a weaker version. We're going to have a homework question which guides you to prove exactly this version. So for now, in the lecture, we're going to prove a weaker version which is somewhat easier, and which actually also suffices for our purpose. You don't really need the stronger version to prove the final theorem, just because the weaker version is only weaker by a little bit. So I guess the homework will guide you towards the stronger version, which also introduces some techniques which are useful. So here is the weaker version. The weaker version is pretty much like how you would discretize in a computer. You just do a trivial discretization using some grid.
So what you do is you just take C to be a trivial grid, in some sense. So what does that mean? It really means that you have this ball, and you take some coordinate system-- you just take the natural coordinate system-- and you discretize your space like this. And then you take all these grid points as your C, and that's it. And then it's just a matter of counting, and of how fine-grained your grid needs to be. So formally, C is taken to be all the points x in R^p such that each coordinate x_i is a multiple of epsilon over square root of p-- that is, x_i equals k times epsilon over square root of p for some integer k, where the absolute value of k is at most B square root of p over epsilon. So epsilon over square root of p is my grid size, and k is the integer multiplied with it. Why do I have this constraint on k? Because at some point, you don't need more points-- if your k is too big, you are already outside of S, and there is no point. And if you do the calculation, this is the right constraint. And so now, we have to do two things. One thing is we have to prove that this is an epsilon cover, and the second thing is we have to see how large C is. So let's do the first thing. So why is this an epsilon cover? Because if you look at any point x in S, you just round it to the nearest grid point. So when you round it, you round it to-- let's call it x prime. Let me not write out exactly what the rounding is-- the rounding just means you take the nearest vertex in this grid-- any reasonable nearest-- that's what I mean. You just do the trivial rounding. Let's say we round toward a smaller number; it doesn't really matter that much. So if you round it, what you get is that |x_i minus x_i prime| is less than epsilon over square root of p.
Because for every dimension, when you round, you create at most epsilon over square root of p error-- epsilon over square root of p is your grid size. And that means that the distance between x and x prime in the l2 sense-- I should mention that the metric we are using is: rho is the l2 norm-- so if you look at the l2 norm of the difference of these two things, this is the square root of the sum, for i from 1 to p, of (x_i minus x_i prime) squared. And then you bound each coordinate, and you get the square root of p times epsilon squared over p, which is epsilon. That's actually why I chose the grid size to be epsilon over square root of p-- just because I want it to come out to epsilon right there. So this proves that it's an epsilon cover, right? And also we can count how large C is. So what is |C|? C is something to the power p, because for every coordinate, you have a bunch of choices for k. And how many choices for k are there? The absolute value of k is less than B square root of p over epsilon, so basically, you get B square root of p over epsilon; because k can be positive and negative, you multiply by 2, and it can also be 0, so you add 1. So that's the number of choices for each coordinate. And one comment is that eventually, only log |C| matters, as you'll see. So log |C| will be p log(2B square root of p over epsilon plus 1). And that's why this weaker version is not super different from the stronger version: the stronger version was (3B over epsilon) to the power p, and its log becomes p log(3B over epsilon). And if we compare the stronger version with the weaker version, the only thing that's different is the square root of p inside the log. So that's why eventually, it doesn't change the bounds too much. Cool. So this is our proof of the weaker version of the lemma. And now, let's use this lemma and the epsilon cover to prove the final bound.
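The whole grid construction fits in a few lines of code. A sketch (usable only in low dimension, since the grid size blows up as something to the power p): it builds the grid with per-coordinate spacing eps/sqrt(p) and rounds an arbitrary point to it.

```python
import itertools
import math

def grid_cover(B, eps, p):
    # Grid with spacing eps/sqrt(p) per coordinate, with the integer k
    # ranging over |k| <= B*sqrt(p)/eps, as in the weaker version of
    # the lemma. Size: (2*floor(B*sqrt(p)/eps) + 1)**p.
    step = eps / math.sqrt(p)
    k_max = int(B * math.sqrt(p) / eps)
    axis = [k * step for k in range(-k_max, k_max + 1)]
    return list(itertools.product(axis, repeat=p))

def round_to_grid(theta, eps, p):
    # Round each coordinate to the nearest multiple of eps/sqrt(p):
    # per-coordinate error <= eps/(2*sqrt(p)), so the l2 error is <= eps.
    step = eps / math.sqrt(p)
    return tuple(round(t / step) * step for t in theta)
```

Rounding any theta in the ball lands on a grid point within l2 distance eps, which is exactly the covering property just proved; enumerating the grid reproduces the (2*floor(B*sqrt(p)/eps) + 1)^p count per coordinate raised to the power p.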
So as we planned, what we do is that we first apply the finite hypothesis analysis to C. And then-- let's say this is number one-- you extend 1 to the whole set S. So now, the first step should be trivial because we already proved it. So the first thing is that you do it for every fixed theta in C. Then, you have probability of this-- if you use Hoeffding's inequality-- I guess let's call the accuracy epsilon tilde, because this epsilon tilde will be tuned-- will be decided later to make the bounds fit. So you get that this is 2 times exponential of minus 2n epsilon tilde squared. This is by Hoeffding-- exactly the same thing as we have done before. And then you take a union bound. You get that the probability that there exists theta in C such that this is not right is small. And how small is it? You multiply C with this exponential of minus 2n epsilon tilde squared. So these two steps are exactly as we did. And if you flip this, you get 1 minus that-- so the good event happens with high probability. I'm just flipping it. So now, we have to do the second step. How do we extend this to everything in S? And so, second-- we are basically using Lipschitzness. And you can see that this is not really anything super clever. It's kind of like a sort of brute force. So just for some quick preparation-- because l of x, theta is kappa Lipschitz in theta, this implies that L theta and L hat theta are both kappa Lipschitz. Why? This is just because if you average two kappa Lipschitz functions, they are still kappa Lipschitz. So if f is kappa Lipschitz and g is kappa Lipschitz, f plus g over 2 is also kappa Lipschitz. And you can prove this by a simple triangle inequality. And you can do this for multiple functions, not only just two functions.
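The Hoeffding-plus-union-bound step can be sanity-checked by simulation. A minimal sketch, with made-up numbers: here each of the |C| "hypotheses" gets its own independent sample for simplicity (in the lecture the same sample is shared, but the union bound does not need independence, so the bound applies either way).

```python
import numpy as np

# Monte Carlo sanity check of the union-bound step: for a finite set C of
# parameters with losses in [0, 1], the chance that ANY of them has
# |Lhat - L| > eps_tilde is at most |C| * 2 * exp(-2 * n * eps_tilde**2).
rng = np.random.default_rng(3)
n, C_size, eps_t, trials = 400, 20, 0.1, 2000

fails = 0
for _ in range(trials):
    # |C| population losses mu_j, and empirical means of n Bernoulli draws each
    mus = rng.uniform(0.2, 0.8, size=C_size)
    Lhat = rng.binomial(n, mus) / n
    fails += np.any(np.abs(Lhat - mus) > eps_t)

bound = C_size * 2 * np.exp(-2 * n * eps_t ** 2)
# The empirical failure frequency should sit below the union bound
# (with a little slack for Monte Carlo noise).
assert fails / trials <= bound + 0.05
```

With these numbers the bound is about 0.013, and the observed failure frequency is comfortably below it.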
You can do it for n functions. So suppose we have this. Now, we also know that for every theta-- so it's supposed to be conditional on this event. So with very high probability, this happens. And suppose this happens. Conditioned on this event, we want to prove that the same thing happens when we replace C by the whole set. And so this means that for every theta in-- I guess I call it capital Theta, not S-- capital Theta, the ball, you can find some theta 0 in C such that theta minus theta 0 in l2 is less than epsilon. This is by the definition of epsilon cover: C is an epsilon cover of capital Theta. That's why you have this. And then this implies that L theta minus L theta 0 is less than kappa times epsilon. This is by Lipschitzness. And so in some sense, you just use theta 0 as a reference point. And you also know that L hat theta minus L hat theta 0 is less than kappa times epsilon. This is also by Lipschitzness. So now, with this tool, what you eventually want is to bound the difference between L hat theta and L theta. And we have seen this kind of triangle inequality-- this kind of manipulation-- already. Because eventually, you care about the difference between L hat and L, but you use theta 0-- some reference point-- to kind of bridge them. So you do this decomposition. You say that this is L hat theta minus L hat theta 0, plus L hat theta 0 minus L theta 0, plus L theta 0 minus L theta. And now, the first and the last terms are about differences between theta and theta 0. So this quantity is less than kappa times epsilon. And this quantity is also less than kappa times epsilon. And the middle quantity is less than epsilon tilde. This is because theta 0 is in C.
So we have already proved that for every theta in C, L hat theta is close to L theta. So that's why we get this third inequality. So in total, if you look at the absolute value, then you can use the triangle inequality to bound the absolute value of the sum by the sum of the absolute values of each of them. You get 2 kappa times epsilon, plus epsilon tilde-- recall that I used a different epsilon tilde for the concentration just so that I can tune this epsilon tilde eventually. And now is the time to set epsilon: we set epsilon to be epsilon tilde over 2 kappa-- or equivalently, epsilon tilde is epsilon times 2 kappa. Then, because you balance these two error terms, you get that this is less than 2 epsilon tilde. So now, let's go back to here, because here, there is something about the cover size we have to deal with. We have to plug in the right cover size. And what is the cover size? So the log cover size-- log C-- is equal to log of, 3B over epsilon to the power of P, which is P log of 3B over epsilon. And I have already set epsilon to be epsilon tilde over 2 kappa, so I need to plug that in, and I get P log of 6B kappa over epsilon tilde. And you can see that kappa is inside the log, so that's why the bound is somewhat not sensitive to the choice of kappa. And epsilon tilde is also in the log, which is also nice. And now, we have to care about this failure probability. So we basically want to say that this is equal to something like delta. So we want to bound the failure probability, 2C times exponential of minus 2n epsilon tilde squared. We'll show that this is small. Actually, in this case, I'm hoping to show that this is exponential of minus P. So how do we show this? Of course, it depends on what epsilon tilde is.
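The discretization argument above is fully deterministic once the cover is fixed, so it can be checked end to end on a toy one-dimensional example. Everything here (the specific loss functions, B, kappa, epsilon) is our own illustrative choice:

```python
import numpy as np

# Numerical check of the triangle-inequality decomposition used above:
#   |Lhat(theta) - L(theta)| <= 2 * kappa * eps + max over cover |Lhat - L|.
rng = np.random.default_rng(1)
kappa, B, eps = 1.0, 1.0, 0.05

x = rng.normal(size=200)                       # "data"
L    = lambda th: np.mean(np.abs(th - 0.3))    # a fixed 1-Lipschitz "population" loss
Lhat = lambda th: np.mean(np.abs(th - x))      # empirical loss; also 1-Lipschitz in th

cover = np.arange(-B, B + eps, eps)            # eps-cover of [-B, B]
cover_gap = max(abs(Lhat(t0) - L(t0)) for t0 in cover)

for th in rng.uniform(-B, B, size=500):
    t0 = cover[np.argmin(np.abs(cover - th))]  # nearest cover point, within eps
    assert abs(Lhat(th) - L(th)) <= 2 * kappa * eps + cover_gap + 1e-12
```

The assertion is exactly the decomposition from the lecture: two Lipschitz terms of size at most kappa times epsilon each, plus the concentration term measured only on the finite cover.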
So you need to choose the right epsilon tilde such that this is true, and that's basically your final bound. And just to get something-- you're going to see that the exact calculation of this is going to be a little bit complicated. But just to get some intuition here-- so this is a heuristic, which is not even exactly correct, but it's approximately correct-- suppose optimistically that log C is equal to P, instead of P times log of 6B kappa over epsilon tilde. So suppose you just have P; you don't have the log term. Then this becomes a very simple calculation. So basically, if you take the log of this desired inequality, you want that-- let me see. If you take the log, you get log 2, which is not super important, plus log C, minus 2n epsilon tilde squared. And suppose log C is equal to P. Then you've got P minus 2n epsilon tilde squared. And if you take epsilon tilde to be square root of P over n, then you get that this is equal to P minus 2P, which is equal to minus P. Which means that 2C times exponential of minus 2n epsilon tilde squared-- if you take the exponential back, you get that this is less than exponential of minus P. So this is fundamentally how it works. But we did make this incorrect assumption that log C is equal to P. But this is not very far off-- it's only off by a log factor. So if you want to fix this, technically, you need to deal with the log factor. It wouldn't change much, but it would introduce a little bit of complication. So I did have the calculation here. I'm just going to basically write it down, but I don't really expect that you follow all of this. It took me one hour to even figure out all the constants, so on and so forth. It's not super important. I think the intuition is already there. But let me just quickly write this, just to say what you do formally. So suppose you only have this bound for log C. So this is all we have.
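The heuristic calculation just described is a two-line arithmetic check; here it is spelled out (with arbitrary values of p and n, chosen by us):

```python
import numpy as np

# Heuristic from the lecture: pretend log|C| = p (drop the log factor).
# With eps_tilde = sqrt(p / n), the exponent in  C * exp(-2 n eps_tilde^2)
# becomes  p - 2 * n * (p / n) = -p,  so the failure probability is ~exp(-p).
p, n = 30, 10_000
eps_tilde = np.sqrt(p / n)
exponent = p - 2 * n * eps_tilde ** 2
assert np.isclose(exponent, -p)
```

This is why epsilon tilde of order square root of P over n is the natural choice: it makes the union-bound term and the concentration term cancel to minus P in the exponent.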
This is 6B kappa over epsilon tilde. And then let's take epsilon tilde to be the square root of, some constant C0 times P over n, times a max of some terms. This epsilon tilde is actually the final bound-- that's why you're going to see kind of the same thing in your final bound. And C0 is a sufficiently large constant, which we will choose a bit later. And you plug in all of this, and again, you look at the log of the inequality we care about, and you plug in this choice of epsilon tilde. You get P log of 6B kappa over epsilon tilde, minus 2n epsilon tilde squared. And you somehow know that if you ignore this log, it already works; it's just that if you do have the log, you still have to deal with it. So you get something like P log-- I'm not even sure whether I really have to write down all of this, but just in case some of you want to have this hard calculation. So you get this. And then-- I guess the first term becomes log 6B kappa, and you expand the first term, with the square root of C0 P-- I'm decomposing the first term-- and then minus C0 P, with the log of kappa, B, and n terms. I guess the way that I always think about this is that when you do the calculation, you always need to check what happens if you don't have the log. So if you don't have the log, then this term is a large constant times P, and this term is P, so that's why it's nice. So eventually, if you take C0 to be something like 32 or 36, I think you can show that this negative term dominates that one. And the other one is inactive when P is large. And then you get that this is less than minus P. For the exact calculation-- there is some more detailed calculation in the notes, but it doesn't matter that much. So that's what we do.
So then basically, this is saying that if you take the exponential-- after this inequality, you get that 2C times exponential of minus 2n epsilon tilde squared is less than 2 times exponential of minus P. So this is our failure probability. So basically, with probability larger than 1 minus O of e to the minus P, we'll have that L hat theta minus L theta is less than 2 epsilon tilde, which is the thing that we wanted. Let me not just copy it again. Cool. So that's the proof. And this proof is a little messy, and this is probably one of the reasons why, if you open up a classical machine learning book, they typically don't show you this proof. It's just because it's a little messy. But the reason why I always try to show this proof is that I feel like it's very intuitive, and it demonstrates what's really going on. And also this kind of thing is actually useful for many recent works, if you look at the technical low-level details. So the fancy Rademacher complexity machinery that we are going to talk about next is very nice, but sometimes it doesn't apply, and you have to really use this. You go back to the most brute-force way to think about it. So maybe just a few quick comments about this proof. I guess if you really think about this, this is really saying that the generalization error is less than something like the square root of log C over n, up to constant factors, plus epsilon times kappa-- so this is not k, this is kappa. So the first part is from the finite hypothesis case, and the second part is the discretization error. And in some sense, you are just trading off these two. And what I mean by trading off these two-- it really comes down to what epsilon you choose. So both terms depend on epsilon, but the first one depends on epsilon in a very weak way, because it depends on epsilon only logarithmically.
So that makes it very easy to trade these off, because you can pick epsilon to be quite small so that the second term becomes small-- sorry, I think, technically, the first term should depend on log 1 over epsilon. So the smaller the epsilon is, the better the second term, but the worse the first term. But the first term increases as epsilon goes to 0 very slowly. So that's why you pretty much can ignore the second term, in some sense, just because you take epsilon to be very small so that the second term becomes negligible. And even for those small epsilon, the first term is still reasonably bounded. And that's why you can make this trade-off really nicely. But in some other cases, as we'll see later, when we do the discretization, the first term won't be as nice as this. It won't be log 1 over epsilon. It will be something that goes to infinity, as epsilon goes to 0, at a faster rate-- sometimes this first term will be something like 1 over epsilon squared. Then the trade-off becomes a little more tricky, and you have to be more careful about it. And finally-- this is a somewhat bird's-eye overview. So log H, or P in this case-- you can think of these as complexity measures. I guess I've mentioned this as well. So these are complexity measures of the hypothesis class. And the general phenomenon is always that a bigger H means you need more samples. You always get a worse bound, which means you need more samples to learn. And in some sense, in the next one or two weeks, we are talking about a more accurate-- I guess accurate may not be the right word-- a more fine-grained complexity measure. So what is the right complexity measure? There is no really decisive answer to what's the right complexity measure. In some sense, it's up to the theorem prover.
But we're going to have a more fine-grained and, in some sense, more fundamental complexity measure in the next two lectures, which is called Rademacher complexity. And you can use that to derive many of these bounds in more principled ways. And in general, I think one of the important questions-- especially in somewhat classical statistical machine learning-- is to find out what's the right complexity measure for your hypothesis class. So we're going to discuss what it really means for a complexity measure to be right or wrong. There's no unique answer, but this is the central question. So you need a complexity measure that really captures the fundamental complexity of the class. For example, if you have an infinite class, you shouldn't use log H. Log H is not really the fundamental complexity measure for an infinite hypothesis class. You probably should use dimensionality. Later in the course, we are going to see that you can use the norm of your parameters as the complexity measure. And it does depend on the specific case. And also sometimes, it depends on the data. So this will be what we discuss in the next few weeks. I think this is a natural place to stop. Yeah. I think that's all for today.
Stanford_CS229M_Machine_Learning_Theory_Fall_2021 | Stanford_CS229M_Lecture_14_Neural_Tangent_Kernel_Implicit_regularization_of_gradient_descent.txt | OK. Hello, everyone. Let's get started. So last time, what we did was the NTK, the neural tangent kernel approach. And so today, we're going to continue with that to finish the last part of the neural tangent kernel approach. And then we talk about the so-called implicit regularization effect. So the last time, briefly, we recall that last time we have done the following. So we have claimed that the are two steps-- three steps in this analysis using an NTK approach. So one step is that you say that f theta x is close to g theta x in some neighborhood. [INAUDIBLE] Oh, wait. Oh, sorry. Yeah, there are too many steps in the setup. So I always forgot some step. The worst step I would forget is that I forgot to record all. This one, you can remind me. But if I forgot to record, then nobody will remind me. So that's the thing I check every time. OK, cool. And I think this is recording. This is recording. And you can hear me on Zoom, right? Maybe-- [INAUDIBLE] it's recording [INAUDIBLE].. That would be great. Nobody seems to say anything. But it sounds like the-- it's recording the-- it's receiving audio. OK. So last time, we said that in some neighborhood, this theta around B theta 0, you can have accurate approximation. And recall that the B theta 0 was something like a neighborhood of size, something that depends on a sigma. So B theta 0 was defined. We show that if you look at the neighborhood, where you have some-- where a theta is close to theta 0 with distance, something like a square root n over sigma, then you indeed have something-- I guess you get how well the approximation is. I think the approximation in this neighborhood-- the approximation error is something like beta square root n-- beta n over sigma squared. All right, that's what we had. 
And also in this neighborhood-- two, in this neighborhood, so there exists a global minimum error 0. Nobody seems to respond to my request about confirming that the audio is working. But I don't know. There are only four people here. Oh, OK. Well-- OK, so you can hear me. Thank you so much. Great, great. Thanks. It's just sometimes I get paranoid by this. OK, thank you so much. OK, cool. So all right. So this is what we proved last time. And we discussed here that this quantity is the key thing, right? Beta over sigma squared is the very key thing. And if it goes to 0, then that's great. Because your error becomes smaller and smaller. Oh, I see. I cannot hear you. That's the problem. I see. Probably now I can hear it. I think my speaker is very-- the volume is very low. OK, cool. Thanks so much. And now, the third step as we discussed last time, is that-- and we have discussed the various cases, where this Beta sigma squared is close to zero, we discussed two cases. And today we are going to talk about the third step, where we show that optimizing with f theta within a neural network is similar to optimizing with g theta. And in some sense, the only thing you care about is an analysis of the optimization for f theta. But you want to do this kind of like relationship, so that you can make the optimization easier. And we also briefly discussed what we do with this optimization. I think there are two ways or one way you-- there are two ways to deal with the g theta. So I think there are two ways. One is something like using strong convexity. And the other is using only the smoothness. And today, we're going to focus on this case, which doesn't require too much of the background about optimization. All right. So now, let's go into the detail. By the way, I think a small remark before I go into the detail-- so why you care about the step three in some sense? Right? So a priori there's no reason. 
OK, so there's one reason, which is that you want to understand what happens when you optimize over neural networks, right? But suppose we are at a moment where we want to prove this number 3 but we haven't succeeded. You would probably question yourself: why do I care about such a result? Even if I prove it, why is it interesting? And the answer is, it's not that interesting, because if you prove that optimizing a neural network is the same as optimizing some linear model, like a kernel method, then why not just use a kernel method, right? And it turns out that that's indeed true: if you use the kernel method, it's not going to work well. And if you optimize the neural network in this way-- in this particular regime, with this particular learning rate and so forth-- it won't work well either. So in some sense, the value of this theorem is only for showing that under a certain regime, optimizing a neural network is the same as optimizing a linear model. But there is no bigger impact, in some sense, just because you are optimizing the neural network in a weird regime, which is the same as optimizing kernels. And in this regime, nothing works very well. But still, for the technical reasons, I still try to go through this. It's not super complicated. And I think, in some sense, the technique is also kind of useful, partly because I think it's somewhat surprising. Because at first, you probably wouldn't believe it, right? Why would you believe that optimizing a neural network would, in any case, be similar to optimizing kernels? And this shows that that's possible in some cases, even though that case is not that informative or that useful. OK. So let's analyze. So we start with the first step. We start with analyzing the optimization of g theta. And this is really just like linear regression, right?
It's really understanding the optimization of linear regression. So the problem is really: you take the min over delta theta of, y vec minus phi delta theta, two-norm squared, with GD. And just to briefly recall the notation, phi is this feature matrix, which is of dimension n by p. So this is an n by p matrix. And each row is something like the gradient of f theta 0 at x i, transposed. You put all of these as rows. What exactly this phi matrix is doesn't matter so much anymore for the rest of the discussion, because phi is just a matrix. And delta theta is the difference between theta and theta 0. And we are just optimizing over delta theta. And what we do is we just take gradient descent. So for gradient descent, you take the gradient-- and the gradient is minus phi transposed times, y vec minus phi delta theta-- so the update is delta theta t plus 1 equals delta theta t plus eta phi transposed times, y vec minus phi delta theta t. All right. This is the gradient update. Well, one of the features of this analysis is that you are looking at the convergence-- the changes-- in the output space. And in some sense, to some extent, this is the spirit of the kernel method, where you're looking not at the parameters. You don't look at the parameter space; you look at the output space. And when you look at the output space, it's kind of like you're looking at the function space, to some extent. But what does that mean? That means that you're looking at the output y-hat at time t plus 1. So this is defined to be just phi times delta theta t plus 1. And let's define the version at time t analogously. And you look at how the residual changes over time. All right. So you compare your output at time t plus 1 with the target output y vec. And how does this change over time? This is just the definition, and you plug in the definition of delta theta t plus 1, which is delta theta t plus eta phi transposed times, y vec minus phi delta theta t, and then subtract y vec. And now this requires some rearrangement to make it look cleaner. And how do I rearrange this?
I guess I'm going to first group everything related to delta theta t. So if you look at what's multiplied in front of phi delta theta t, you get I minus eta phi phi transposed. And then you look at the multiplier in front of y vec, and you get eta phi phi transposed minus identity, which is minus, open paren, I minus eta phi phi transposed, close paren. So interestingly, you can write the whole thing as, I minus eta phi phi transposed, times phi delta theta t, minus, I minus eta phi phi transposed, times y vec. And then you get, I minus eta phi phi transposed, times, y-hat t minus y vec. All of these are basically just standard calculations. If you take some version of a linear regression course, then probably you would have seen this. So that's the update. That's the recursion for the residual of the output. And you can see what happens: basically, the residual in the previous round gets multiplied by this matrix. And what is this matrix? This matrix is a matrix that is smaller than the identity, because you have I minus something, and the something is a PSD matrix. So you are shrinking your residual in some way every time. And you can quantify how fast you are shrinking. So take eta to be less than, say, 1 over 2 tau squared, where tau squared is defined to be sigma max of phi phi transposed. Then I minus eta phi phi transposed can be shown to have operator norm less than 1 minus eta sigma squared, where sigma is the minimum singular value of phi. We'll prove this in a moment.
But suppose you have this. Then you know that y-hat t plus 1 minus y vec, in two-norm, is less than the operator norm of this matrix times, y-hat t minus y vec, two-norm. So this is less than, 1 minus eta sigma squared, times, y-hat t minus y vec, two-norm. And if you unroll the recursion, you get, 1 minus eta sigma squared, to the power t plus 1, times, y-hat 0 minus y vec, two-norm. So you have an exponential decay of the error. [INAUDIBLE] Yes, that's a good point. So sigma is the minimum singular value of phi. And let's prove this right now. Let's call the operator norm claim number one. But suppose you have number one; then you have all of this exponential decay. So now, let's prove number one. So how do we do that? So establishing 1-- intuitively, it's just really that this is a PSD matrix, so when you subtract it from the identity, you get operator norm less than 1. All right. But we need to know exactly how small it is-- it has to be strictly less than 1. That's why we need this inequality 1. And to see this, I guess there are multiple ways, but the way I tend to think about it is this. First of all, sigma is sigma min of phi, which is also the square root of the minimum eigenvalue of phi phi transposed. This is just by the standard properties of singular values. So one way to think about this is that if you look at the eigenvalues-- eigenvalues or singular values; because this is a PSD matrix, it doesn't matter-- of phi phi transposed, suppose the eigenvalues are tau 1 squared down to tau n squared, where tau 1 squared is equal to tau squared-- that's the definition of the sigma max-- and tau n squared is equal to sigma squared. This is my definition of the sigma min. And then I minus eta phi phi transposed-- we care about this matrix-- this one has eigenvalues, or singular values, 1 minus eta tau 1 squared, up to 1 minus eta tau n squared. Right?
This is just because I has singular values of 1-- I'm not sure whether this thing requires a proof. I guess there are many ways to see this. The best way for me to see it is always to take the eigendecomposition. You say phi is u sigma v transposed, where sigma is the diagonal matrix with tau 1 up to tau n. And then I minus eta phi phi transposed is just I minus eta u sigma squared u transposed. And I is equal to u u transposed, so you get u times, I minus eta sigma squared, times u transposed. And this is the SVD, or the eigendecomposition, of I minus eta phi phi transposed. And that's why what's inside gives the eigenvalues, or the singular values, of the resulting matrix. All right. And now you bound this, right? If you care about the operator norm, you care about the largest absolute value of the eigenvalues. So basically, the operator norm of I minus eta phi phi transposed is less than the max over j of, the absolute value of, 1 minus eta tau j squared. And the choice of eta is trying to guarantee that you never get to the negative side. You make sure that eta is less than 1 over 2 tau squared, and tau squared is the largest eigenvalue. So that's why even the largest tau will not make 1 minus eta tau squared negative. So everything is positive. So then this is just equal to 1 minus eta tau n squared, which is equal to 1 minus eta sigma squared. OK? So that sounds good. Yeah, in some sense these are the basics of optimization, but this course doesn't require an optimization background. That's why I'm providing some basic tools here. All right. OK, so basically we are done with the analysis of this linear regression. After you have this, you know that your error is decaying exponentially fast. And then after a sufficient number of iterations, your error is small. All right.
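Both facts just derived-- the residual recursion and the operator-norm claim-- are easy to verify numerically. A small sketch (sizes, seed, and the overparameterized shape n less than p are our own illustrative choices):

```python
import numpy as np

# Check two facts from the lecture for GD on  min_d ||y - Phi d||^2:
#   (a)  || I - eta * Phi Phi^T ||_op = 1 - eta * sigma_min(Phi)^2
#        when eta = 1 / (2 * sigma_max(Phi)^2),
#   (b)  residual recursion  r_{t+1} = (I - eta * Phi Phi^T) r_t
#        and the rate  ||r_t|| <= (1 - eta * sigma_min^2)^t * ||r_0||.
rng = np.random.default_rng(0)
n, p = 5, 40                               # overparameterized: n < p
Phi = rng.normal(size=(n, p))
y = rng.normal(size=n)

K = Phi @ Phi.T                            # PSD, almost surely full rank
lam = np.linalg.eigvalsh(K)                # eigenvalues tau_n^2 <= ... <= tau_1^2
eta = 1.0 / (2 * lam[-1])                  # eta = 1 / (2 * tau^2)

# (a) operator-norm claim
op = np.linalg.norm(np.eye(n) - eta * K, ord=2)
assert np.isclose(op, 1 - eta * lam[0])

# (b) GD, residual recursion, exponential decay
d = np.zeros(p)
r = Phi @ d - y
r0 = np.linalg.norm(r)
for t in range(200):
    r_pred = (np.eye(n) - eta * K) @ r     # predicted next residual
    d = d + eta * Phi.T @ (y - Phi @ d)    # gradient step
    r = Phi @ d - y
    assert np.allclose(r, r_pred)
assert np.linalg.norm(r) <= (1 - eta * lam[0]) ** 200 * r0 + 1e-9
```

Note that the contraction rate is governed by the smallest eigenvalue of phi phi transposed, while the admissible step size is governed by the largest-- exactly the two quantities singled out in the derivation above.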
So basically, maybe let's call this two. So from two, you have: for T at most something like, log 1 over epsilon, over eta sigma squared, iterations, your error y-hat T minus y vec is less than epsilon times the initial error. And you can take epsilon to be very small, because the number of iterations depends only logarithmically on 1 over epsilon. OK. So this is the analysis for g. And now let's talk about the analysis for f. All right. So you will see that the analysis for f is very similar to this, but with some tweaks. Maybe let me state the theorem just so that we have a formal statement somewhere. So there exists a constant, say c0, in 0, 1, such that when this key quantity, beta over sigma squared, is less than c0, for sufficiently small eta-- and how small eta has to be could depend on beta, sigma, maybe the dimension p, and so forth; I think you can have a concrete bound for it, but I'm omitting these details so that the explanation is not too complicated-- then in T equals O of, log 1 over epsilon, over eta sigma squared, steps, the empirical loss for f theta T is also less than epsilon. So the empirical loss for the neural network also has this error epsilon. So how do we do this? I guess we have kind of discussed the intuition already. The intuition is that you always try to relate this to the g. And here, by relating to the g, basically, you just try to follow the proof you had before-- just try to imitate it as much as possible. And of course, there will be some differences, and then you will deal with the differences.
So I think one difference is that-- by the way, this is a proof sketch, because I'm going to omit some small technical details, which are not super important. So the important thing is that you have a changing phi, in some sense. So this is the difference when you have neural networks compared to the linear regression case. So you'll see why this is the case. So suppose you define this phi superscript t to be the feature matrix-- the kernel-- at time t. All right. So this is the NTK feature matrix when you Taylor expand at time t. And if you Taylor expand at time t, then the gradient descent-- I think we have discussed this before, but now you can see it explicitly. I think I have claimed that if you Taylor expand at time t, then the gradient with respect to the approximation is the same as the gradient with respect to the original neural network, right? So this is what I wrote at the very end here-- this is a remark. So if you Taylor expand at time t, then the gradient with respect to the neural network is the same as the gradient with respect to the linear function. This is just because these two things agree at this point up to first order. So that's why, even when you compose with the loss function, up to first order, they still agree. And here, you can even see that explicitly. So suppose you write down the gradient of the loss function at time t. Then what you get is 1 over n times-- you can do the chain rule-- y i minus f theta t of x i, times the gradient of f theta t at x i, summed over i. You can verify this without even using the remark I had, right? This is just the chain rule. And then this gradient is the i-th row of phi t-- and this corresponds to the difference-- maybe let me write this more explicitly. This is y i minus y-hat i at time t, times the gradient.
And then if you write this in matrix-vector form, you get phi t-- which corresponds to this one-- times (y vec minus y-hat t), with a 1/n in front of it. Right, so that's the gradient. And that means the update rule for theta t-- I somehow have a delta theta here in the notes; I'm going to just use theta t instead of delta theta t, they're the same, they only differ by a translation-- is theta t plus 1 equals theta t minus eta times this gradient, which equals theta t minus eta times 1/n times phi t (y vec minus y-hat t). OK. So now there's a small thing here. Suppose you give this a name: call (1/n) phi t (y vec minus y-hat t) b t, so the update is theta t plus 1 equals theta t minus eta b t. So what's our goal? Our goal is to derive a recursion for the y's. That's what we did before, right-- a recursion for the y's. And how do I get a recursion for the y's? I have to look at how y changes. What is y-hat at time t plus 1? It's one entry of the output at time t plus 1, and I want to write it in terms of the output at time t. So how do I do that? This is a nonlinear function. Before, we just did a linear multiplication, because before, if this f were g, then this would just equal phi times theta t plus 1. But because this is nonlinear, we have to do something: we Taylor expand at time t, so that we have a relationship between theta t and theta t plus 1. If you Taylor expand, you write f theta t (x_i) plus the gradient of f theta t at x_i times the difference between the two iterates, plus something higher order. And what is the difference? The difference is a function of eta-- the difference is exactly minus eta b t.
So that's why we can write f theta t (x_i) plus the gradient of f theta t at x_i times minus eta b t, plus a second-order term, which is quadratic-- if I write it somewhat informally. More formally, I can write the second-order term as eta squared times something, because the difference between the iterates has an eta in it; when you square it, you get eta squared. And this is a term I want to ignore. I'm trying very hard here just because I want to ignore this term. And the reason we can ignore it is because it's eta squared. So basically, what I'm going to say is that M t, the constant, is not a function of eta. So if you fix everything else and take eta to be very small-- if you take eta to go to 0-- then the second-order term, the eta squared M t term, is negligible. You can do this more formally, but I don't want to go into so many details. There is a way to bound M t by something; whatever you bound it by, you then just say that if eta is small enough, eta squared M t becomes negligible. That is basically how you do it formally. So if you ignore this second-order term, then everything becomes simple. For now, let's ignore the second-order term. Then what you have is that y-hat t plus 1-- if you put this equation, let's call it equation 3, in vector form-- equals y-hat t plus this linear function of b t, which is really minus eta times phi t transposed times b t, and then plus something like eta squared times some constant. I'm going to keep the eta squared term for a little bit, but essentially I want to ignore it. And then you can rewrite this as y-hat t minus eta phi t transposed times-- and what is b t?
b t is related to the difference between theta t plus 1 and theta t; let's go back to that. So this phi-- wait, I seem to be missing a constant here. Oh, I see. This 1/n-- there's some mismatch in my notes: for the linear regression case I didn't have the 1/n in the loss function, and now I have the 1/n, and that's why there's a little bit of a mismatch. But it's not a fundamental difference. So let's have the 1/n-- and actually this is phi t, so this is phi t-- and then you have eta times phi t phi t transposed times (y vec minus y-hat t) times 1/n. Actually, let me just ignore the 1/n forever, because you can redefine the loss function however you want; let's just say we don't have the 1/n in the loss function. So then you have this. And if you subtract y vec from both sides and reorganize, you get y-hat t plus 1 minus y vec equals (I minus eta phi t phi t transposed) times (y-hat t minus y vec). I think there was a small sign problem somewhere-- this should be a plus. Right. OK. But the point is the following. If you compare this equation-- technically, you still have some eta squared term, which we don't care about-- with the recursion from before, the only difference is that the matrix is different. Before, you were multiplying by a fixed matrix, phi phi transposed; now you are multiplying by phi t phi t transposed. But everything goes through the same way-- you don't actually need the matrix to be the same as in the original proof. You only need this matrix to be a contraction: you only need the operator norm of I minus eta phi t phi t transposed to be less than 1. Right?
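To see the second-order term eta squared M t concretely, here is a small numerical sketch (my own toy setup, not from the lecture; I use a tanh activation so that the curvature term is visible and nonzero). It takes one gradient step on a tiny two-layer net and compares the true new residual with the linearized prediction (I - eta Phi_t Phi_t^T)(y-hat t - y vec); halving eta should shrink the gap by roughly a factor of 4, confirming the gap is O(eta^2).

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n = 3, 20, 4
X = rng.standard_normal((n, d))                # n examples in dimension d
y = rng.standard_normal(n)
a = rng.choice([-1.0, 1.0], size=m)            # fixed outer weights
W = rng.standard_normal((m, d))                # trainable inner weights

def f(W):                                      # network outputs on all examples
    return np.tanh(X @ W.T) @ a

def jac(W):                                    # Phi_t: n x (m*d) Jacobian of outputs
    g = (1 - np.tanh(X @ W.T) ** 2) * a        # n x m, since d/dz tanh(z) = 1 - tanh(z)^2
    return (g[:, :, None] * X[:, None, :]).reshape(n, m * d)

def lin_gap(eta):
    """Gap between the residual after one true GD step and the
    linearized (NTK) prediction (I - eta Phi_t Phi_t^T)(y_hat_t - y)."""
    r = f(W) - y
    Phi = jac(W)
    grad = (Phi * r[:, None]).sum(axis=0).reshape(m, d)   # Phi^T r, reshaped to W's shape
    r_true = f(W - eta * grad) - y
    r_lin = (np.eye(n) - eta * Phi @ Phi.T) @ r
    return np.linalg.norm(r_true - r_lin)

ratio = lin_gap(1e-2) / lin_gap(5e-3)          # O(eta^2) gap, so the ratio is near 4
```

The gap being quadratic in eta is exactly why, for small enough eta, the eta squared M t term can be dropped from the recursion.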
So suppose we ignore the eta squared M t term, because it's second order. And suppose theta t minus theta 0 is within sigma over (4 beta) at time t-- suppose you are not very far away from theta 0. Then you know that phi t minus phi in operator norm is less than sigma over 4. This is by the Lipschitzness of phi, which is our assumption. And that means that sigma min of phi t is not very different from sigma min of phi: it's at least sigma minus sigma over 4, which is 3/4 times sigma. So sigma min of phi t is also good-- you still have a lower bound on the smallest singular value; it's just a little bit weaker, up to a constant factor. And that means that the operator norm of I minus eta phi t phi t transposed is less than 1 minus eta times (3/4 sigma) squared. All right, so very similar to before. But there is an assumption here. This sounds great, right? But there is an assumption, which is that theta t is not very far away from theta 0. This is something you cannot take for granted-- you have to prove it is the case. So that's why we have to argue inductively: basically, the only thing left is to inductively prove that theta t minus theta 0 is never too big. And in some sense, this is expected. This is expected because, recall that theta hat-- the global minimizer we constructed in the last lecture-- is at distance of order square root n over sigma from theta 0. And if square root n over sigma is much, much less than sigma over (4 beta)-- which happens when beta over sigma squared is sufficiently small-- then there exists a global min within this region of radius sigma over (4 beta). And if there exists a global min within this region, why would you ever leave the region?
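The step from "phi t is close to phi" to "sigma min of phi t is at least 3/4 sigma" is an instance of Weyl's inequality for singular values: the least singular value moves by at most the operator norm of the perturbation. A quick numerical sketch (random matrices of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 4, 30
Phi = rng.standard_normal((n, p))              # plays the role of phi at initialization
E = 0.01 * rng.standard_normal((n, p))         # small perturbation, plays phi_t - phi
s_min      = np.linalg.svd(Phi, compute_uv=False).min()
s_min_pert = np.linalg.svd(Phi + E, compute_uv=False).min()
op_gap = np.linalg.norm(E, 2)                  # spectral norm of the perturbation

# Weyl: |sigma_min(Phi + E) - sigma_min(Phi)| <= ||E||_op
assert abs(s_min_pert - s_min) <= op_gap + 1e-12
```

In the lecture's setting, ||phi_t - phi||_op < sigma/4, so sigma_min(phi_t) >= sigma - sigma/4 = (3/4) sigma.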
Right? That's why you should expect that the iterate always stays within this region. And how do we formally do this? Formally, you just do an induction. You know that-- and here is where the square root n comes in-- 1 over square root n times the norm of (y-hat t minus y vec) is O(1), because every entry is on the order of O(1) and you have n entries. From the recursion, we actually have exponential decay of the error, but even without that, for every time t you have this bound. And if for every time t you have this, then 1 over square root n times the norm of phi (theta t minus theta hat) is also O(1). Why? Because phi theta hat is equal to y vec-- theta hat is the interpolating solution, the one that we constructed last lecture. And then this means that theta t minus theta hat in 2-norm is at most O(square root n over sigma): the smallest singular value of phi is at least order sigma, so a bound on phi (theta t minus theta hat) converts into a bound on theta t minus theta hat at the cost of a factor 1 over sigma. So your iterate is not very far away from the target theta hat. And we also know that the target theta hat is not very far away from theta 0-- this is also at most O(square root n over sigma), because that's what we showed last time. Then by the triangle inequality, theta t minus theta 0 is at most theta t minus theta hat plus theta hat minus theta 0, which is at most O(square root n over sigma). And this is less than sigma over (4 beta) when beta over sigma squared is much, much less than 1 over square root n. So this is how you inductively maintain the distance-- how you inductively show that theta t is never very far away from theta 0. Yeah, the steps sound a little bit complicated, but actually the intuition is very simple.
There are probably many ways to prove this; I just presented one. The intuition: there's already a global minimum in the region, so there shouldn't be any reason to leave it. Basically, you have theta hat here and theta 0 here, the distance between them is of order square root n over sigma, and you are optimizing. In some sense theta hat is your target, because theta hat has the best fit, so you are moving closer to theta hat-- why should the distance ever get much bigger afterwards? That's why this works: if you look at the iterates, you are essentially moving toward theta hat. OK, so enough of this. Out of all of it, we get this inequality: the norm of y-hat t plus 1 minus y vec is at most (1 minus eta times (3/4 sigma) squared) times the norm of y-hat t minus y vec. And then you can run the recursion to get exponential decay of the error. OK. Any questions? I think I made a small typo somewhere in the assumption of the theorem; I need to fix that. I think my assumption should be that beta over sigma squared is less than c0 over square root n. But it doesn't really matter very much, because, as we saw last time, by changing the scaling you can make beta over sigma squared arbitrarily small too. Any questions? [INAUDIBLE] Sure. [INAUDIBLE] Yup. [INAUDIBLE] I guess there's one version of this. Let me rephrase your question, and let me know if it's not what you asked. So one question you could ask is whether you really rely on the exponential decay in the kernel case to establish this relationship between the neural network and the kernel. I think the answer to that is no.
So the second type of approach, which I outlined last time but didn't really go into detail on, doesn't require exponential decay of the error. In that case, for both the kernel and the neural network you only show some polynomial speed of decay-- the error is polynomial in t-- and you can still make this relationship work. So exponential decay is not that important. But I think this is something people realized only after the first few papers. At the very beginning, the very first papers used this exponential decay, and people thought that it's because the error decays so fast that you don't leave the neighborhood. But you can do the analysis so that even without exponential decay you still don't leave the neighborhood. Because whether you leave the neighborhood probably depends mostly on whether there is a global minimum in the neighborhood. If there is a global minimum in the neighborhood but somehow you cannot converge to it exponentially fast, that's probably still fine, as long as you converge to it eventually. All right, I'm not sure whether that's what you asked. [INAUDIBLE] OK. Right, right. You do want to say-- you also want to characterize the neural networks, right? If they don't have the same property but you can somehow analyze the optimization of the neural network directly, that's fine too. But the relationship is something to help us bridge the gap between what we knew and what we didn't know. The neural network is something we didn't know how to analyze, but the kernel is something we knew; and if they are similar, then you can hope to analyze the neural network. Yeah, so that's why we show they are doing something similar. OK. All right. I have a little bit more to add about the neural tangent kernel. I guess we've discussed this many times.
The limitation of the neural tangent kernel is that you at most do as well as the kernel method. Right? So basically the question is: how well can a kernel method work? Are we really characterizing the power of deep learning? If deep learning is only doing as well as kernels, is that good or bad? And the answer-- at least, I think most people believe this-- is that neural networks can do much better than kernels, and this characterization of neural networks as kernels does not capture the true power of neural networks. You can try to make this precise in various ways. There are a lot of papers that try to go beyond the NTK approach-- if you search for "beyond NTK" or "beyond lazy training," you'll see a bunch of papers, including some of mine, that analyze deep learning in different regimes. But there is a simple separation if you don't care about the optimization at all-- if you only care about the statistical aspect, the power of the regularization-- where you can easily show that neural networks can do better than kernels. And this is an example: an example where NTK, or any kernel method, is statistically limited. In some sense, the intuition is that the limitation comes from the fact that the kernel, or the features, are fixed in the NTK approach. You don't have any adaptivity to the data: your data probably wants to use some particular features, but you are stuck with a fixed feature map. Here is the concrete example. Suppose x is in R^d and y is in {plus 1, minus 1}. And let's say each coordinate of x is i.i.d. uniform over {plus 1, minus 1}. Here the superscript indexes the examples and the subscript indexes the dimension.
And let's say y is equal to x1 times x2. So we have a very simple target function: the product of the first two coordinates of the data. If you draw this-- suppose this axis is x1 and this is x2-- you have four different sign combinations: the two points where x1 and x2 have the same sign are positive examples, and the two where they have different signs are negative. So this is not linearly separable, because the four points are positioned like an XOR. You have to use a nonlinear model, or a linear model on some feature space. All right. So suppose you use a neural network, and suppose you regularize the l2 norm of the parameters. This is equivalent to regularizing the norm we discussed before, C(theta), which is something like the sum over i of |a_i| times the norm of w_i, for the network y equals sum of a_i sigma(w_i transposed x)-- the complexity measure that is kind of the path norm. And we have shown that regularizing the l2 norm of the parameters is the same as regularizing this complexity measure, which is what actually gives the generalization guarantees; we discussed this in some sense. Now, suppose you use a neural network with this regularization. Then what you'll find is that the best solution-- by best, I mean the minimum-norm solution-- is a sparse one: it uses a sparse combination of neurons. In this case you can exactly compute the best solution. I'm not going to prove it, but I think it's relatively believable. First of all, the best solution doesn't use any of the other dimensions. That seems believable, right? Why would you use any other dimensions if your target is only a function of the first two dimensions? You only need something about the first two dimensions.
You only need the following four neurons. One neuron computes ReLU of (x1 plus x2). Another computes ReLU of (minus x1 minus x2). Another computes ReLU of (x1 minus x2), and another computes ReLU of (x2 minus x1). And I claim that 1/2 times [ReLU(x1 plus x2) plus ReLU(minus x1 minus x2)] minus 1/2 times [ReLU(x1 minus x2) plus ReLU(x2 minus x1)] is actually equal to the target function. If you want to verify that, I can briefly do it. Since ReLU(t) plus ReLU(minus t) equals the absolute value of t, this expression equals 1/2 times (|x1 plus x2| minus |x1 minus x2|). And now I claim that this equals x1 times x2, where x1 and x2 are both binary. How do you see this? I guess the only way I can see it is to try all four combinations. If x1 and x2 have the same sign-- both 1 or both minus 1-- then |x1 minus x2| is 0 and |x1 plus x2| is 2, so after multiplying by the 1/2 you get 1; and in that case the product x1 x2 is indeed 1. And if x1 and x2 have different signs, the first term is 0 and the second term is 2, so you multiply by 1/2 and get minus 1. Right. So good. So basically, if you use a neural network and you regularize, you can show that this is the solution it finds-- a very sparse combination of a small number of features. In some sense, when you use regularization, you find these four features and do a linear combination of them. These four features are the right features for this task. However, suppose you use NTK. Then you don't learn any features; you do a dense combination of your existing, fixed features. So what do you learn when you do NTK?
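The four-neuron construction can be checked mechanically. This small sketch verifies both the identity ReLU(t) + ReLU(-t) = |t| hiding inside it and the claim that 1/2 (|x1 + x2| - |x1 - x2|) equals x1 times x2 on all four sign patterns:

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def net(x1, x2):
    # 1/2 [ReLU(x1+x2) + ReLU(-x1-x2)] - 1/2 [ReLU(x1-x2) + ReLU(x2-x1)]
    # = 1/2 (|x1+x2| - |x1-x2|), which equals x1*x2 on {-1,+1}^2
    return 0.5 * (relu(x1 + x2) + relu(-x1 - x2)
                  - relu(x1 - x2) - relu(x2 - x1))

for x1 in (-1.0, 1.0):
    for x2 in (-1.0, 1.0):
        assert net(x1, x2) == x1 * x2           # all four combinations check out
```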
Roughly, the intuition is that your prediction will be something like a sum of a_i sigma(w_i transposed x), or maybe, written abstractly, a sum of a_i times phi_i(x). There is a bunch of fixed features, each feature is some phi_i, and each feature uses all the dimensions. Exactly what the features are depends on what kernel you use: with the NTK kernel you get one feature vector, with a random-features kernel you get some other features. But whatever features you use, each one is a function of all the coordinates-- you cannot specialize to a special subset of features. And also, because you are finding the minimum l2 norm solution for the coefficients in front of the features, you don't prefer any sparse solution. (Sorry-- I was looking at the wrong version of my notes, so I have to improvise a little bit here.) So if you look at NTK, what you do is minimize the l2 norm of this vector a, subject to fitting the data: the sum over i of a_i phi_i(x^(j)) equals y^(j) for every example j. Whereas for the neural network-- I think we have claimed that the neural network is the same as l1 SVM in a feature space-- the corresponding thing is to minimize the l1 norm of a subject to the same constraints, sum over i of a_i phi_i(x^(j)) equals y^(j). So in some sense, when you do the neural network and you have a lot of features, you are choosing a sparse subset of features. And when you do NTK, you are minimizing the l2 norm, and that never gives you sparse combinations-- it actually prefers dense combinations. It's the reverse direction: you want as smooth a combination of the existing features as possible. That's why you have to pay more samples if you use NTK-- you are using suboptimal features. And this can be proved in this case.
You can prove this as a theorem: the kernel method with the NTK kernel requires n to be Omega(d squared) samples to learn this problem with error less than 1, while in contrast the regularized neural net only needs n = O(d) samples. Any questions about this? I think this part is a little bit hand-wavy, because I didn't want to go into all the details, and it also depends a little bit on what we discussed in the past-- the connection between neural networks and l1 SVM. Any questions? So maybe just to wrap this up once again: if you do a neural network with regularization, then we have shown this is equivalent to l1 SVM in a feature space-- we are trying to find the sparsest combination of features that fits our data. Right? And in this particular example, it's pretty intuitive that finding a sparse combination is useful, because not all the features are equally useful: the four features we designed are much better than a random feature. That's why the neural network with regularization has good sample complexity. On the other hand, when we do the NTK kernel-- or most other kernels-- you are not trying to find a sparse combination of the features; you are finding a dense combination, because you are finding the minimum l2 norm solution. And each of the features is a function of all the coordinates of the data point, so the features are individually not that useful-- there is a lot of noise in the features, and you have to rely on averaging out the noise over many features to learn something. You can still learn something, but it's going to be less efficient. Right. I think that's the summary. OK. So if there are no other questions, I'm going to move on to the next topic, which is the implicit regularization effect.
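The contrast between the sparse solution and the dense minimum-l2-norm interpolant can also be seen numerically. In this sketch (the sizes, the random feature map, and the support threshold are my own illustrative choices, not the lecture's construction), the feature matrix contains the four "right" ReLU features plus random ReLU features; a 4-sparse interpolant exists, while the minimum-l2-norm interpolant, the kind of solution a kernel method finds, spreads its weight over many features.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 10, 40
X = rng.choice([-1.0, 1.0], size=(n, d))       # data uniform over the hypercube
y = X[:, 0] * X[:, 1]                          # target: product of first two coords

relu = lambda t: np.maximum(t, 0.0)
# Feature map: the four "right" features first, then 60 random ReLU features.
good = np.stack([relu(X[:, 0] + X[:, 1]), relu(-X[:, 0] - X[:, 1]),
                 relu(X[:, 0] - X[:, 1]), relu(X[:, 1] - X[:, 0])], axis=1)
Phi = np.hstack([good, relu(X @ rng.standard_normal((d, 60)))])   # n x 64

# A 4-sparse interpolant exists, using only the four designed features.
a_sparse = np.zeros(64)
a_sparse[:4] = [0.5, 0.5, -0.5, -0.5]
assert np.allclose(Phi @ a_sparse, y)

# The minimum-l2-norm interpolant also fits exactly, but it is dense.
a_l2 = np.linalg.pinv(Phi) @ y
assert np.allclose(Phi @ a_l2, y)
dense_support = int((np.abs(a_l2) > 1e-6).sum())   # number of active features
```

The l2-min-norm coefficient vector puts nonzero weight on far more than four features, which is the "dense combination" the lecture describes.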
I'm not sure whether you still remember what we discussed in the mysteries-of-deep-learning-theory section, so I'm going to briefly repeat the high-level goal here. The observation we had about empirical deep learning is that multiple global minima of the training loss exist, and the optimizers have some implicit preferences among them. And we have claimed that almost every aspect of the optimizer has some preference. For example, if you use the particular initialization that enables NTK, then you have the NTK preference: you learn the NTK solution. And if you use some other initialization, you have some other preference. And we have kind of concluded that the NTK solution is the wrong preference-- you don't do anything beyond the kernel method, you actually do exactly the same as the kernel method-- so basically that means you are finding the wrong global minimum, one that doesn't necessarily generalize as well as other global minima. So from now on, we're going to look at other global minima of this objective and see what other optimizers prefer. If you use different optimizers, you may prefer a solution that is different from the NTK solution. [INAUDIBLE] NT-- oh yeah. What does it mean-- [INAUDIBLE] Right. So why do I call it the NTK initialization? The NTK initialization basically means an initialization under which you can prove the NTK result. Maybe specifically-- I think last time we had two examples. So one example is-- [INAUDIBLE] Right, right, right. [INAUDIBLE] Right, right, right. So for example, take the two-layer case: we have this overparameterized model with some width, and we initialize the a_i to be plus or minus 1 and the w_i to be spherical Gaussian. And you could, for example, initialize with something much smaller. Right?
And actually, if you really do the experiments with this exact parameterization, you should initialize either the a_i's or the w_i's smaller by something like a square root n factor, and then you're going to see very different empirical results. We have done this-- it's actually in the paper, and many people have done it; it's a relatively simple experiment. So here you can already say that the initialization is the culprit. For the other case, where you change the parameterization to get the NTK regime, you can say that the parameterization is the culprit. And also, even in this case-- suppose you initialize the same as NTK but you have enough stochasticity in the updates; it doesn't have to be super large, just a little bit larger than zero-- then you will leave that initialization and converge to some other place. So that's another way to leave the NTK regime. All right. So we are going to discuss these other ways. Basically, what we will discuss next is either using [INAUDIBLE] to leave NTK, or using stochasticity. And what else? You can also use the learning rate-- the learning rate is almost the same as stochasticity, because if you have a larger learning rate with SGD, then in some sense your stochasticity is bigger. Right. So the first thing I'm going to do is the implicit regularization effect from initialization. And you will see that in certain cases-- we don't necessarily really care about leaving NTK per se; we really care about having better generalization. That means you have to leave NTK, but you probably have to do more than that to get better generalization. So this is what we're going to do in the next 15 minutes of this lecture, and in the next lecture: the implicit regularization effect.
And I'm going to start with a simple case: overparameterized linear regression. You need overparameterization because-- especially if you consider linear models-- one important thing is that you have to have multiple global minima; otherwise there is no so-called implicit regularization effect. Optimizers converge to a global minimum, so you need multiple global minima for the optimizer to have a choice between them. That's why we need overparameterized regression. Actually, there are infinitely many global minima when you have overparameterization. And we will see that in this case, small initialization prefers a low-norm solution. This is also the case when we go beyond linear models in the next lecture, and the high-level conclusion is the same: if you use small initialization, then you prefer a low-norm solution. Today, in the next 15 minutes, we're only going to do the linear model, and this is actually not that hard. So let's set up first. This is the standard linear regression setting. For this lecture, I'm using the lower subscript for the example index, because if you look at any linear regression book, they use the subscript for examples; elsewhere we don't do this. So each x_i is an example, example i, and you stack them into a matrix x. Let's assume x is full rank, meaning rank n, and let's also assume n is much smaller than d, OK? We have a parameter beta, so you get a loss function: 1/2 times the 2-norm squared of (y vec minus x beta)-- the 1/2 is just for convenience. This is my empirical loss, OK? So this is standard linear regression. And indeed, L-hat(beta) has an infinite number of global minima, all with loss 0, and you can characterize exactly what the global minima are. So what are the global mins?
So beta equals x-pseudoinverse times y vec plus zeta, where zeta is any vector orthogonal to x_1 up to x_n, is a global min. If your beta has this form, then it's a global min, and these are actually all the global mins. And actually-- I think last time someone asked about the pseudoinverse-- maybe let me quickly go over some basic properties of the pseudoinverse. My way of thinking about it is probably slightly different from Wikipedia's. The way I always think about the pseudoinverse is through the SVD, because with the SVD I can verify everything, so I don't have to remember the identities. Suppose you have a matrix x of dimension n by d, and suppose x is of rank r; of course, r has to be less than both n and d. So the way I remember every property of the pseudoinverse is the following. Consider the SVD of x, which is u sigma v-transpose, where sigma is of dimension r by r-- let's say we drop all the zero singular values-- u is of dimension n by r, and v is of dimension d by r. Then you know that the column-span of u is the same as the column-span of x, and the column-span of v-- because there's a transpose here-- is the row-span of x. And the pseudoinverse, in this notation, you can think of as defined to be v sigma-inverse u-transpose. Here sigma is a diagonal matrix with entries sigma_1 up to sigma_r, and the sigma_i's are all positive, so this inverse is well defined. And x-pseudoinverse is just this: v sigma-inverse u-transpose. Now, if you want to understand the properties of the pseudoinverse, you can verify them yourself. So what is x times x-pseudoinverse? It's u sigma v-transpose v sigma-inverse u-transpose. v-transpose v is the identity, and sigma times sigma-inverse is the identity, so you get u u-transpose.
So what is u u-transpose? It's the projection onto the column-span of u, and the column-span of u is the same as the column-span of x. So x x-pseudoinverse is the projection onto the column-span of x. And x-pseudoinverse times x-- if you do the same calculation, it's v sigma-inverse u-transpose times u sigma v-transpose, which is v v-transpose, which is the projection onto the row-span of x. You can also see that the dimensions match: this is a matrix of dimension d by d, the rows of x are in dimension d, and the columns of v are in dimension d. So in our case, where x is n by d and the rank is n, x x-pseudoinverse is the projection onto the column-span of x; and the column-span-- the span of all the columns-- is now all of R^n, so this projection is just the identity. And x-pseudoinverse x is the projection onto the row-span of x. How many rows are there? There are n rows of x, and they don't span everything because the dimension d is bigger. So this is not the identity-- it's really just the projection onto the row-span of x; you cannot simplify it more. This was a little bit long for a building block, but I hope it helps; this is how I understand the pseudoinverse. I never remember what x x-pseudoinverse is equal to-- this is how I rederive it. So now we have all these global minima, and with this it's easy to verify that they are global minima: take x beta, which equals x x-pseudoinverse y vec plus x zeta. The zeta is orthogonal to the rows of x, so x zeta is 0. So you get x x-pseudoinverse y vec, and x x-pseudoinverse in this case is the identity, so you get y vec. OK, so that's why x beta is equal to y vec.
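Both projection identities, and the characterization of the global minima, are easy to check numerically. This sketch (random sizes of my own choosing) verifies that x x-pseudoinverse is the identity on R^n when x has full row rank, that x-pseudoinverse x is only the projection onto the row-span, and that x-pseudoinverse y vec plus zeta fits the data for any zeta orthogonal to the rows:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 5, 12                                   # n << d, so x has full row rank n
x = rng.standard_normal((n, d))
x_pinv = np.linalg.pinv(x)                     # d x n

# x x^+ projects onto the column-span of x = all of R^n, so it is the identity.
assert np.allclose(x @ x_pinv, np.eye(n))

# x^+ x projects onto the row-span of x: a rank-n projection, not the identity.
P = x_pinv @ x                                 # d x d
assert np.allclose(P @ P, P)                   # idempotent, hence a projection
assert np.linalg.matrix_rank(P) == n
assert not np.allclose(P, np.eye(d))

# The global minima: x^+ y + zeta, with zeta orthogonal to the rows of x,
# still fits the data exactly (x zeta = 0).
y = rng.standard_normal(n)
zeta = (np.eye(d) - P) @ rng.standard_normal(d)   # component orthogonal to row-span
beta = x_pinv @ y + zeta
assert np.allclose(x @ beta, y)
```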
That's why it's a global minimum. And the question is which global minimum you're going to converge to. So the theorem is that if you run gradient descent on L hat of beta with initialization beta 0 equal to 0 and a sufficiently small learning rate--and actually you know exactly how small it has to be, I just don't want to give you too much jargon--so if the learning rate is small enough and the initialization is 0, then this converges to the minimum-norm solution. And the minimum-norm solution beta hat is defined to be the one with minimum 2-norm among all global minima of the loss function. So basically you get this 2-norm preference for free, right? You are minimizing this, and you didn't say that you want the minimum-norm solution. You just said you want to do gradient descent. But you get the minimum-norm solution for free. And the reason you got it is because you expressed your implicit preference through the initialization. OK, cool. So yeah, I think I have 5 minutes, which is perfect for the proof sketch. Actually, this is really a proof, but I ignored some small details; that's why I call it a sketch. So the first step is that by standard convex optimization, you know that L hat of beta t goes to 0 as t goes to infinity. You know that if you run for a long time, then your loss will become 0. I'm not going to show how to do this, but you can invoke any off-the-shelf optimization result. And the second thing is that beta hat is actually equal to x x-pseudoinverse--sorry, x-pseudoinverse times y vec. So we know that all of these are global minima, and if you take zeta to be 0, then that's the minimum-norm solution. There is no x here. And this can also be simply verified. So for any zeta orthogonal to x1 up to xn, you look at the 2-norm of x-pseudoinverse y vec plus zeta.
This squared is equal to x-pseudoinverse y vec 2-norm squared, plus zeta 2-norm squared, plus 2 times the inner product of x-pseudoinverse y vec and zeta. And this is at least x-pseudoinverse y vec 2-norm squared, because the zeta norm squared is at least 0 and the cross term is equal to 0. Why is the cross term equal to 0? The cross term is 2 times zeta-transpose x-pseudoinverse y vec, and I claim that this is 0. And why is this 0? I guess this is actually a good way to practice what I just said. So x-pseudoinverse is v sigma-inverse u-transpose. So the column-span of x-pseudoinverse is the same as the row-span of x. And zeta is orthogonal to the rows of x, which means that zeta is orthogonal to the columns of x-pseudoinverse, right? So zeta-transpose times x-pseudoinverse pairs zeta against the columns of the pseudoinverse, and that's why everything is 0. So this is 0: zeta is orthogonal to the column-span of x-pseudoinverse, which is equal to the row-span of x. So basically you see that the norm only increases unless you set zeta to be 0. That's why zeta equal to 0 gives the minimum-norm solution. Right, so step 3--steps 1 and 2 are basic facts about this linear regression setting; step 3 is what's really about the optimization--you can prove that beta t is in the span of x1 up to xn. You can prove this inductively. And why is this the case? This is a super simple induction, because beta t plus 1 is equal to beta t minus eta times the gradient at beta t. And what's the gradient? The gradient is x-transpose times x beta minus y vec. So this is in the column-span of x-transpose, which is the row-span of x. So basically your update is always in the row-span of x. That's why you never leave this span, right?
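The Pythagorean step above can be verified directly. This is a minimal NumPy sketch, with illustrative names and sizes, showing that adding any null-space vector zeta to x-pseudoinverse y vec leaves a zero cross term and can only increase the norm.

```python
import numpy as np

# Sketch: for zeta orthogonal to the rows of X, the cross term vanishes, so
# ||X^+ y + zeta||^2 = ||X^+ y||^2 + ||zeta||^2 >= ||X^+ y||^2.
rng = np.random.default_rng(1)
n, d = 4, 7  # underdetermined: more parameters than data points
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

beta_min = np.linalg.pinv(X) @ y  # candidate minimum-norm solution

# Build zeta in the null space of X (orthogonal to every row of X):
z = rng.standard_normal(d)
zeta = z - np.linalg.pinv(X) @ (X @ z)  # strip off the row-span component
assert np.allclose(X @ zeta, 0)

# Cross term is zero, so the norms add in the Pythagorean way:
assert abs(beta_min @ zeta) < 1e-10
lhs = np.linalg.norm(beta_min + zeta) ** 2
rhs = np.linalg.norm(beta_min) ** 2 + np.linalg.norm(zeta) ** 2
assert np.isclose(lhs, rhs)
assert lhs >= np.linalg.norm(beta_min) ** 2
```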
Maybe I should start with: beta 0 is in the span of x1 up to xn. And each time you update, the update is in the span of x1 up to xn. So by induction, you're always in this span. So then, because you're always in this span, the only solution with L hat of beta equal to 0 in this span is this one, right? Because what are the solutions with error 0? The solutions with error 0 are these ones, x-pseudoinverse y vec plus zeta. And among these, which are in the row-span of x? Only the first term is in the row-span of x, because all the zeta parts are not in the row-span of x; they are orthogonal to the row-span of x. So the only solution that is in the row-span of x is just the first term, and that happens to be the minimum-norm solution. And that's why you get the minimum-norm solution. And basically all the magic comes from this, right? This is a regularization in some sense. This is a constraint imposed by the algorithm. The algorithm says that you cannot go everywhere; you can only go to those places that are in the span of the data. So you have to stay in the span of the data. And it happens that in the span of the data there is only one solution, and that solution is the minimum-norm solution. So in some sense, if I draw a picture--OK, I guess I'm running late, but real quick. If I draw a picture, I think this is a very difficult picture to draw, but you can still try it. So say this blue direction is the span of the data. Let's suppose you have only one data point, so the span of the data is only one dimensional. And then you have a subspace of solutions which is orthogonal--so this is orthogonal here to the span. This is the set of solutions, all right? It's orthogonal to the span of the data, and the intersection point is the target solution.
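The whole argument can be checked end to end. Here is a NumPy sketch, with illustrative dimensions and constants, of the theorem: gradient descent from zero on an underdetermined least-squares problem stays in the row span of the data and converges to the pseudoinverse (minimum-norm) solution.

```python
import numpy as np

# Sketch of the theorem: GD on the squared loss, started at beta_0 = 0 with a
# small step size, converges to the minimum-norm interpolating solution.
rng = np.random.default_rng(2)
n, d = 4, 10
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

beta = np.zeros(d)              # beta_0 = 0: the "implicit preference"
eta = 0.01                      # small learning rate
P_row = np.linalg.pinv(X) @ X   # projection onto the row-span of X
for _ in range(50000):
    grad = X.T @ (X @ beta - y)   # gradient lies in the row-span of X
    beta = beta - eta * grad

# Inductive invariant: the final iterate is still in the row-span of X.
assert np.allclose(P_row @ beta, beta)

beta_min = np.linalg.pinv(X) @ y
assert np.allclose(X @ beta, y)      # zero training loss (an interpolator)
assert np.allclose(beta, beta_min)   # and it is the minimum-norm one
```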
So the intersection point is really x-pseudoinverse y vec. So you start at the origin, and you try to reach this purple plane, because that's what the optimization wants to do. The optimization wants to reach the purple plane, but the optimization also says you can only go in the blue direction. And so that's why you meet at the intersection. And the intersection is the closest point to the origin. OK, I guess that's, yeah. [INAUDIBLE] Yeah. So the question is whether you need the condition that you start in the span, right? So, yes, you do. Because if you don't start in the span--suppose, for example, you start here. What happens is that you can only move in this direction; that's what the algorithm says. The algorithm says the update is in the span, so all your changes are in the span. So basically you go this way, and you hit here. And then this place is not the minimum-norm solution anymore. This place is going to have a higher norm than the ideal point. [INAUDIBLE] Yes. So you can say the implicit regularization effect always happens, but the effect is the minimum-norm solution only if your initialization is 0. You always have some preference: whatever you do with the initialization, you have some preference about which global minimum you want to converge to, right? And if you want the preference to be the minimum-norm solution, then you really have to choose 0 as the initialization. Any other questions? [INAUDIBLE] So the question is whether there's any hope that this can transfer to nonlinear cases. I think here we are using a lot of facts from linear algebra. We know what the minimum-norm solution is, and we have the orthogonality, everything, right? When you have nonlinearity, you don't have most of this. So those heavily linear-algebraic parts that we discussed probably don't transfer at all. But somehow we can at least find one other situation where, when you have nonlinear models, you still prefer the minimum-norm solution.
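The answer about nonzero initialization can also be demonstrated numerically. In this illustrative sketch (names and sizes are not from the lecture), the null-space component of beta_0 is frozen by the dynamics, so GD still interpolates but lands at a higher-norm solution.

```python
import numpy as np

# Sketch: with beta_0 != 0, only the row-span part of beta gets corrected by
# GD; the null-space part of beta_0 survives and inflates the final norm.
rng = np.random.default_rng(3)
n, d = 4, 10
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

beta = rng.standard_normal(d)  # nonzero init: a different implicit preference
beta0_null = beta - np.linalg.pinv(X) @ (X @ beta)  # null-space part of beta_0
eta = 0.01
for _ in range(50000):
    beta = beta - eta * X.T @ (X @ beta - y)

beta_min = np.linalg.pinv(X) @ y
assert np.allclose(X @ beta, y)                  # still a zero-loss solution
assert np.allclose(beta, beta_min + beta0_null)  # min-norm part + frozen part
assert np.linalg.norm(beta) > np.linalg.norm(beta_min)  # but larger norm
```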
And that's next lecture. But the mechanism is not exactly the same. So next lecture and this lecture, the only connection is that the final message is the same, but the techniques are quite different. We still don't know how to unify them in the right way. [INAUDIBLE] Right, right. [INAUDIBLE] Yeah, yeah. So you are absolutely right. The difficult case comes from the very, very small learning rate case--the infinitesimally small learning rate case. So even for an infinitesimally small learning rate, you basically have a differential equation, right? You just have a trajectory, and you want to know where the trajectory goes. I don't know too much about differential equations, but I think the problem is how to solve that equation. You know a solution exists, you know there's a trajectory, but where this trajectory really goes--that's the hard part. I'm not aware of any papers that use tools from differential equations heavily. So this is a useful language: the differential equations formulation and perspective are very useful. But typically the hard part is, how do you solve it? [INAUDIBLE] In some cases, you can. I know one paper where you can solve it, using the structure of the problem. You have to literally solve it using some new math. It's not like you can invoke a theorem in the differential equations literature saying these kinds of questions can all be solved. I don't think so. OK, sounds great. OK, cool. See you next week.
Stanford CS229M: Machine Learning Theory, Fall 2021. Lecture 10: Generalization bounds for deep nets. So last time we talked about covering numbers. The covering number gives an upper bound on the Rademacher complexity, and our goal is to bound covering numbers, because this is a new tool for bounding the Rademacher complexity. And we discussed what the bounds are for linear models--I didn't show any of the proofs, but there are existing bounds, which are actually 20 years old. And then we also talked about the Lipschitz composition lemma for covering numbers, which is much easier than the corresponding lemma for Rademacher complexity. So basically, if you know a function class has a good covering number bound, and then you compose it with a Lipschitz function, you still have a reasonable covering number bound. That's the general idea. And today we're going to talk about deep neural networks. We are going to use some of these tools, because you can see that a deep net is actually a composition of multiple linear models with Lipschitz functions--the activations. So this is the goal of this lecture. So let me set up--actually sorry, give me one moment. I think I probably have to change the mask because I'm always having the fog. I don't know what happens with this mask. Let's change one. Maybe there's some deficiency with the mask. OK, let's continue. So we have a neural network. The setup is that we have some neural network, call it h theta, where theta denotes the set of parameters, and we have r layers. So the network looks like this: the last layer doesn't have any activation, and there is an activation after each of the earlier layers. So if you write out the math formula, you first multiply x with W1, then you pass through a nonlinearity, then you multiply by W2, and so on and so forth, for r layers.
This is the network. There are r layers and the Wi are the weights. And the kind of bound we're going to talk about is the following theorem. Assume each xi has 2-norm at most c, and consider the family of networks h theta with some norm control on the weights: we constrain the operator norm of Wi to be at most kappa i, and we constrain the 2,1-norm of Wi-transpose to be at most bi. If you control your function class like this, then the Rademacher complexity is less than--up to a constant factor--c over square root of n, times the product over i of kappa i, times the sum from i equals 1 to r of (bi over kappa i) to the power 2/3, all raised to the power 3/2. This is a complex formula; let me explain it in a moment. Alternatively, as a corollary--I guess this is not completely formal, because you have to talk about failure probabilities--roughly speaking, the generalization error is less than O tilde of 1 over the margin, times 1 over square root of n, times c, times the product of the operator norms, times the [INAUDIBLE] norm term. Anyway, the important thing here is that the complexity measure depends on a few things. One is the operator norm of the weight matrices, and it enters as a product: the product of the operator norms of all the weights shows up in the complexity. And there is also this other term, which you can think of as a polynomial in the kappa i and bi. And that's not really so important: as long as it's polynomial, the product of the operator norms will probably be the dominating term, and the polynomial in the bi and kappa i will be relatively small. So we don't necessarily have to care about exactly what this 2/3 means.
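To make the spoken formula easier to parse, here is a hedged LaTeX rendering of the stated bound. The exact shape of the polynomial term, the (b_i / kappa_i)^{2/3} sum, is an assumption filled in from the "2/3" mentioned later; only the overall structure (product of operator norms times a polynomial in the b_i and kappa_i) is asserted in the lecture.

```latex
% Hedged reconstruction of the spoken theorem; the (b_i/\kappa_i)^{2/3}
% form of the sum is an assumption, not verbatim from the transcript.
\mathcal{R}_n(\mathcal{F}) \;\lesssim\;
\frac{c}{\sqrt{n}}
\left(\prod_{i=1}^{r} \kappa_i\right)
\left(\sum_{i=1}^{r} \left(\frac{b_i}{\kappa_i}\right)^{2/3}\right)^{3/2},
\qquad
\|W_i\|_{\mathrm{op}} \le \kappa_i,\quad \|W_i^\top\|_{2,1} \le b_i .
```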
Actually, the exponents don't have any special meaning; they're really just something that comes out of the proof. But as long as they're polynomial, we are relatively happy with it. So basically the product of the operator norms is the important term. And this term, if you look at the bound, comes from the Lipschitzness of the model: kappa i is a bound on the Lipschitzness of a single layer, and the product of the kappa i is a bound on the Lipschitzness of the composition of all the layers. So without any details, you can imagine this term comes from some use of Lipschitz composition. What is the thing right above the x in the expression [INAUDIBLE]? This is assumption? Sorry. Just the symbol that you wrote right above the x. This is i. Oh, you assume that it's true for every i. I think this can be relaxed a little bit--but, again, it's not very important. You can maybe relax it so that the average norm of the xi is less than c. It's not super important. What is the operator norm? Oh, right--yeah, sorry, I guess I didn't define it. This is also the spectral norm, the largest singular value, if any of that makes sense to you. And the formal definition is just the max over x with 2-norm at most 1 of the 2-norm of Wx. I call it the operator norm because, if you think about W as an operator, this is asking how the operator changes the norm: if you give it a unit-norm vector, how large can the output norm be? Yeah. So, OK, cool. And you can see that this is about Lipschitzness--maybe I should expand this a little bit. This is about the Lipschitzness of the linear model x maps to Wx, because if you care about Lipschitzness, what do you have to verify? You have to verify that Wx minus Wy is less than some constant times x minus y.
And what should that constant be? If you prove the inequality, you're going to get the operator norm--the spectral norm--of W as that constant. That's why this corresponds to the Lipschitzness of the linear model. Any other questions? OK, cool. So by the way, I haven't gotten any questions from Zoom for a long time. You should feel free to ask questions--you don't have to, but feel free to unmute yourself. OK, so how do we prove this? So in the next 30 minutes, we're going to talk about this proof. The fundamental idea is that you cover this set of functions F iteratively, and iteratively means that you cover more and more layers gradually. And to do this iteratively, you have to use the Lipschitzness--sometimes the Lipschitz composition lemma that we have discussed--and you also want to control how the error propagates. That's a high-level summary. It's kind of abstract, but let me dig into the details. So for simplicity, let's abstract each layer of f as fi. Basically, fi corresponds to a matrix multiplication plus an activation layer. That's one layer. And then you can consider f as the composition of fr with fr minus 1, and so on and so forth. So for every layer you have certain choices--you can choose your weight matrix--and then you compose all of these function classes. We have used this composition notation multiple times; it really just means that you are looking at fr composed with fr minus 1, composed down to f1, where each fi is from the family capital Fi. This abstraction will allow us to have much cleaner notation, but fundamentally you can just think of each of the fi's as a layer, right?
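The operator-norm-equals-Lipschitz-constant claim above can be sketched in a few lines of NumPy (all names and sizes are illustrative): the spectral norm bounds the stretch of every pair of inputs, and the top right-singular vector shows the bound is tight.

```python
import numpy as np

# Sketch: the operator (spectral) norm of W is its largest singular value,
# and it is exactly the Lipschitz constant of x -> Wx in the 2-norm.
rng = np.random.default_rng(4)
W = rng.standard_normal((6, 4))

sigma_max = np.linalg.norm(W, 2)  # matrix ord=2 norm = largest singular value
assert np.isclose(sigma_max, np.linalg.svd(W, compute_uv=False)[0])

# ||Wx - Wy|| <= sigma_max * ||x - y|| for every pair x, y:
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    assert np.linalg.norm(W @ x - W @ y) <= sigma_max * np.linalg.norm(x - y) + 1e-9

# The bound is tight: the top right-singular vector achieves it.
v_top = np.linalg.svd(W)[2][0]
assert np.isclose(np.linalg.norm(W @ v_top), sigma_max)
```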
And what we know is that--so for the sake of preparation, suppose that every fi in capital Fi is kappa i Lipschitz. This is actually the case for us, because we restricted the spectral norm--the operator norm--of each Wi to be less than kappa i, which means every linear layer is kappa i Lipschitz. And the ReLU is 1-Lipschitz, so even when you compose with the activation, the layer is still kappa i Lipschitz. So suppose each of these functions is kappa i Lipschitz. Then you know that fi of x minus fi of y, in 2-norm, is less than kappa i times x minus y in 2-norm. And, just for simplicity, suppose fi of 0 is equal to 0--this is also the case for the real networks we care about. And also suppose the norm of xi is less than C, which is our assumption. So with all of this, we know a bunch of basic things. For example, what's the norm of the output after applying i layers to the input? The norm can be bounded, because each layer multiplies the norm by at most a factor of kappa: you get kappa i times kappa i minus 1 times kappa i minus 2, and so on, down to kappa 1, times C. And let's define this to be Ci. So this is some basic preparation: under this abstraction, you know a bound on the output norm of each layer, and you know each layer is Lipschitz. And what I'm going to do has two steps. First, you control the covering number of each layer. Second, you have a combination lemma that combines them together: a lemma that turns a single-layer covering number bound into a multiple-layer covering number bound.
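The norm-propagation preparation can be checked concretely. This sketch (widths, kappas, and the rescaling trick are all illustrative assumptions) builds a ReLU net whose layers have prescribed spectral norms and verifies the bound C_i = kappa_i times C_{i-1} at every layer.

```python
import numpy as np

# Sketch: with ||W_i||_op <= kappa_i, 1-Lipschitz ReLU, and f(0) = 0, the
# hidden activations satisfy ||h_i|| <= kappa_i * ... * kappa_1 * C.
rng = np.random.default_rng(5)
C, widths, kappas = 2.0, [10, 16, 16, 8], [1.5, 0.8, 2.0]

# Random weights rescaled so that layer i has spectral norm exactly kappa_i:
Ws = []
for i in range(3):
    W = rng.standard_normal((widths[i + 1], widths[i]))
    Ws.append(W * (kappas[i] / np.linalg.norm(W, 2)))

x = rng.standard_normal(widths[0])
x = x * (C / np.linalg.norm(x))  # input with ||x|| = C exactly

h, bound = x, C
for W, kappa in zip(Ws, kappas):
    h = np.maximum(W @ h, 0)   # ReLU layer: ||ReLU(Wh)|| <= ||Wh|| <= kappa*||h||
    bound *= kappa             # C_i = kappa_i * C_{i-1}
    assert np.linalg.norm(h) <= bound + 1e-9
```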
And number one is kind of easy, because a single layer is just a linear model composed with a Lipschitz activation, so by Lipschitz composition you can just invoke what we discussed last time. So the important thing is: how do you turn a single-layer covering number bound into a multiple-layer covering number bound? That's basically the main thing I'm going to discuss. So there is a lemma that does this. Under the relatively abstract setup above, assume that for every set of inputs with l2 norm less than Ci minus 1--these inputs are the ones used to define Pn and the metric L2 of Pn; to define the covering number, you have to specify the metric and which empirical inputs you're evaluating on--so I'm assuming that for every set of inputs satisfying this norm constraint, you have a covering number bound: the log covering number of Fi at radius epsilon i in L2 of Pn is less than some function g of epsilon i and Ci minus 1, some function of the norm of the inputs and of the target radius. So this is just an assumption: it's assuming that you have a single-layer covering number bound of this form. And you do have such a bound--I just didn't give you the exact formula. If you instantiate it on a linear model, you're going to get something like Ci minus 1 squared over epsilon squared--the norm of the input squared over epsilon squared. That's what happens for linear models. But suppose you have this single-layer covering number bound; then the conclusion is that you can turn it into a multilayer covering number bound. And the form of this translation is not very clean.
But it's like this. There exists an epsilon cover of Fr composed down to F1, for epsilon equal to the sum over i of epsilon i times the product of the kappa j for j bigger than i. [INAUDIBLE] Sorry, one moment, let me finish, OK. What's the symbol on the right? It's just above epsilon i and ci minus 1? Sorry, can you say that again? In the expression where it is less than something? Sure. What's that symbol? This is g. Yeah, so I'm assuming a generic function g here for the abstraction. When you really use it for linear models, it's going to be something like Ci minus 1 squared over epsilon i squared. So this is g. So there exists an epsilon cover with this radius such that the log size of the cover is bounded by the sum over i from 1 to r of g of epsilon i and Ci minus 1. So basically, if you have a log covering number bound of this form for every layer, then you have a log covering number bound for the composition, and the log covering numbers just add up as a sum. But the tricky thing is that the cover radius also grows: the radii also add up in some way, which is a little bit complicated. Basically, your radius is added up in a way where you also multiply by some of these kappas, the Lipschitz constants, and your log covering numbers are added up as well. So this is the fundamental mechanism for turning a single-layer bound into a multiple-layer bound. Of course, I'm going to use this in some way at the end so that we get the final result, because you have to choose what the epsilon i's are, right? Eventually you choose the epsilon i's so that you get the desired target radius, and you work out what exactly the formula becomes for that particular choice of epsilons. Does that make sense so far? But before doing that, I'm going to first prove this lemma, and then I'm going to do the derivations. So after you have this lemma, which is the core--
After that, it's just choosing parameters: you choose the epsilons in some way that is in your favor and work out the final bound. OK. And in some sense, the interpretation of this lemma is that you can add up the log covering number bounds in this way, as long as you pay some additional radius. OK. So this proof is, in some sense, actually pretty simple, but the exposition is a little bit challenging. The fundamental idea is the following. We start with the concatenation of n data points. So you have n data points, and you map these n data points to a point in a bigger space--a vector of dimension n, or actually a matrix--where each point is the concatenation of f of x1 up to f of xn. And this is the so-called set Q that we have to cover. I need to draw this in a good way so that I have more space. So you start with n points, and you map them into this space. You can use different functions f--any f1 in capital F1--and if you choose different f1's, you're going to map to different points. And if you just have one layer, what you do is cover this set Q; that's what we did when computing the covering number for one family of functions F1. So then what you do--I'm basically just reviewing what we have done for covering numbers for one family of functions--is you create these bubbles that cover it. You create these centers, and these are the points in the cover; let's call it c1. So c1 is an epsilon 1 cover of F1. This is what that means. And now we are going to see: how do we turn this into a cover for F2 composed with F1?
So that's the job we're trying to do. And what's really going on here is that--maybe let's call the input capital X, and let Q1 be the family of outputs where the function is chosen from F1. So what about the composition, if you add another layer? What happens is that for every point in Q1, you can apply different functions: for any function little f2 in capital F2, you can apply it to map to a new point in the new space. So from every point here, you get a bunch of possible outputs, and from every other point, you get another bunch of possible outputs, all right? Each of these new points could be your image after applying two layers. So now we're trying to cover this new set of outputs--Q2, let's say. And how do we cover it? The approach we're going to take is, in some sense, pretty brute force. You want to leverage the existing cover for capital F1 in some way. So what you do is you look at a center in c1, and you look at the images of this point after applying a second layer. So you get something like this: the set of images of this point. Let's call this point f1 prime of X, which is in c1. And then you look at all the outputs from f1 prime of X: you get this family of points where you apply f2 to f1 prime of X, where f2 can be chosen arbitrarily from capital F2. And now what we do is cover this set by a new epsilon cover. So you say, I'm going to cover this set with a bunch of bubbles. And what does that mean? It really means that you choose a subset of capital F2, because here you are ranging over all possible functions in F2.
So to choose the cover points, you just drop some of them: you choose a subset, a discretization of capital F2. That's basically the approach. And you do this for every possible point in c1 and cover them. So suppose you have another point in c1. Then you look at all of its images and you build another cover. You do this for every possible point in c1: every point in c1 induces a set, and that set induces a cover. And then you take the union of all of these red bubbles, and that union becomes a cover for Q2. For example, suppose you have, let's say, f1 prime prime of X here--I should use a consistent color. This is mapped to this set of points here, the set of all f2 of f1 prime prime of X where f2 is in capital F2. Then you create a cover for this set, so you discretize F2 again, and you take the union of all of these red bubbles as your cover for Q2. So any questions so far? So, formally, what we do is the following. Epsilon 1 up to epsilon r are the radii for each layer, which are TBD--we will choose them eventually; I guess in this lemma they are not TBD, they are just given to you. And then c1 is the epsilon 1 cover of F1. That's easy. And then you say: for every f1 prime in c1, construct c2, a cover in the second space--but this cover depends on f1 prime--to epsilon 2 cover the set of f2 composed with f1 prime, which is what I wrote above: f2 of f1 prime of capital X, with f2 ranging over capital F2, all right? So for every set like this--this set is literally the blue set I drew here--I choose a cover.
And I denote that cover by c2 of f1 prime, because this cover depends on f1 prime. And then I take the union: I let c2 be the union of all the c2 of f1 prime, where f1 prime ranges over c1. So this is how I construct the cover for the second layer, and this is supposed to be a cover for capital F2 composed with capital F1. OK, any questions so far? So there are several questions. One question is how good this cover is, and the other is how large this cover is. The size of this cover is relatively easy to compute, because you are basically just blowing up the size multiplicatively: for every element of c1 you create a cover, so you just multiply the covering numbers together, in some sense. Formally, you can say that the log of the size of c2 of f1 prime is bounded by g of epsilon 2 and C1--this is my assumption, because my assumption is that as long as your input is bounded by C1 and your radius is epsilon 2, you have this bound, all right? That means the size of c2 is bounded by the size of c1 times the exponential of g of epsilon 2, C1, because for every point in c1 you have a bound for the corresponding set, and then you multiply by the size of c1. And that means the log size of c2 is less than the log size of c1 plus g of epsilon 2, C1, which is equal to g of epsilon 1, C0 plus g of epsilon 2, C1. Actually, I forgot to define C0: just for convenience, define C0 to be c, the bound on the input. So the Ci's are the bounds on the activation layers, and C0 is the bound on the input. OK. So basically the log sizes add up. That's easy.
And we'll deal with the covering radius at the end. So before computing the covering radius, let's define how to proceed with more layers. Similarly, given ck, suppose you have covered k layers; now you're constructing a cover for the k plus 1 layer. What you do is say: for any fk prime composed with fk minus 1 prime, down to f1 prime in ck, you construct some ck plus 1, which is a function of fk prime up to f1 prime, so that it epsilon k plus 1 covers the set of fk plus 1 composed with fk prime down to f1 prime. And I define ck plus 1, the final cover, to be the union of all of these sets. And, similarly, you can prove that the log size of ck plus 1 is less than the sum of all the single-layer covering number bounds: g of epsilon k plus 1, Ck, plus all the way up to g of epsilon 1, C0. All right, so I've shown you how to cover it--it's just an iterative cover that's pretty brute force, in some sense. And now the question is: why is this a good cover? What's the radius? So basically we answer this question: for every fr composed down to f1, which belongs to this set we want to cover, you pick a function in the set, and you want to say that it can be represented by something in the cover with some small distance. So how does that work? First, you know there exists f1 prime in c1 such that rho of f1, f1 prime is less than epsilon 1. That's something you know because c1 is an epsilon 1 cover of capital F1. So now, say you try to pick something in c2 that can cover f2 composed with f1. How do you do that? You basically use the construction, in some sense. Maybe I should draw this a little bit more. The first thing is: suppose you have a point here, which is f1 of X. You cover it by this point, f1 prime of X, right?
So now suppose you have a point in the second layer, somewhere here, which is computed from that f1(x): you apply some f2 to it. And what you do is you first look at the neighbor in the first layer. So you've got this point, and you look at the image of this point in the second layer — maybe something here. I guess let's assume you're applying f2 here, so you get f2 of the original point; you use the same f2 on the cover point and you get this point. And then after you get this point, you look at its neighbors on the right. So you get this one. Basically, this will be the cover for the purple point. I'm not sure whether this makes sense. Sounds good? So, in other words — more formally — you want to say there exists a function in this cover C2,f1', such that rho of it and the blue point is small. What's the blue point? The blue point is f2 composed with f1'. So there exists a function in the cover — let's write it as f2' composed with f1', just to make it match what my cover is doing — such that rho(f2' composed with f1', f2 composed with f1') is less than epsilon 2. So your cover has this structure: you first apply the f1' in your cover, then the f2' in the cover. So this function is of the form f2' composed with f1', which is in this cover. And this f2' actually implicitly depends on f1' as well, but let's ignore that in the notation.
So you've got rho(f2' composed with f1', f2 composed with f1') small. But that point is not what you really want to cover, because we want to cover f2 composed with f1. So what you care about is rho(f2' composed with f1', f2 composed with f1). And you can see there's still some difference, because the cover guarantee is at f1', not at f1. That's why you do a triangle inequality. You say the target is less than rho(f2' composed with f1', f2 composed with f1') plus a term where you only differ in the first layer — you use the covered point as the intermediate term. The first piece is less than epsilon 2, and you're left with the first-layer difference. But this difference gets propagated. If you look at this figure — it's a little bit tricky — this is the difference in the first layer, but once you apply this f2, you have a bigger difference in the second layer. The difference can be blown up a little bit, because even though you apply the same function, the function may blow up the distances. That's why you use the Lipschitzness to say this is less than epsilon 2 plus kappa 2 times rho(f1', f1), which is less than epsilon 2 plus kappa 2 times epsilon 1. That's how you bound the covering radius for the second layer. Any questions? And then you can similarly do all of this for general k. There exists a function fk', which depends on f1' up to fk-1', in this set Ck — let's write it as fk' composed with fk-1' up to f1' — such that the distance to fk composed with fk-1' up to f1' is less than epsilon k. That's the definition of the cover. And then you have to see why this is a good thing for the original function.
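Here is a minimal executable sketch of the two-layer radius bound, under an assumed toy setup: each layer class is the scalar maps x -> a*x with slope a in [0, 1], rho is the absolute difference at a fixed input x0 = 1, and kappa2 = 1 bounds the Lipschitz constant of every f2. The grid covers and the loop are illustration only, not the lecture's construction verbatim.

```python
eps1, eps2 = 0.1, 0.05
kappa2 = 1.0                           # every f2 has |slope| <= 1, so 1-Lipschitz

def cover(eps):
    """An eps-cover of {x -> a*x : a in [0, 1]}: grid the slope at spacing 2*eps."""
    n = int(1 / (2 * eps)) + 1
    return [min(2 * eps * i + eps, 1.0) for i in range(n)]

C1, C2 = cover(eps1), cover(eps2)
worst = 0.0
for a1 in [i / 50 for i in range(51)]:          # target first layer, slope a1
    for a2 in [i / 50 for i in range(51)]:      # target second layer, slope a2
        a1p = min(C1, key=lambda c: abs(c - a1))            # nearest f1' in C1
        # cover f2 composed with f1' at the point f1'(x0) = a1p;
        # since a1p <= 1, nearest slope in C2 is within eps2 at that point
        a2p = min(C2, key=lambda c: abs(c * a1p - a2 * a1p))
        worst = max(worst, abs(a2p * a1p - a2 * a1))        # rho at x0 = 1
assert worst <= eps2 + kappa2 * eps1 + 1e-9     # radius <= eps2 + kappa2 * eps1
```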
Recall that this is not actually what you really care about. You care about fk composed with fk-1 down to f1 — with no primes at all. That's the thing you really care about, and you want to show its distance to the cover element is also small. And how do you do this? You expand this into multiple terms — this kind of telescoping sum is actually pretty useful in many cases. You first compare with the term where everything below the top layer is primed, and then you just gradually peel off more and more primes, until finally you reach the term you care about, where there are no primes at all. So this is just the triangle inequality. And now you bound each of these terms. The first term, by definition of the cover, is less than epsilon K. For the second term, you see that the top-layer functions are the same, and the bottom layers are the same; the only difference comes from the difference between fK-1' and fK-1. Because of the cover, that gives you epsilon K-1, and then you also have to blow it up a little bit because of the fK composed on top of it, so you pay the Lipschitzness of fK, which is kappa K. (Sorry, my K and kappa look almost the same.) Then you have epsilon K-2 times kappa K-1 times kappa K, and so on and so forth, until, in the last term, the only difference comes from the first layer. So you pay epsilon 1 — because f1' is from an epsilon 1 cover — times a lot of Lipschitz constants: kappa K times kappa K-1 down to kappa 2. And if you take K to be r, then you get the eventual theorem: the radius of the final cover is the sum over i of epsilon i times kappa i+1 up to kappa r.
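The telescoping bound can be tabulated directly. A hedged sketch with made-up per-layer radii and Lipschitz constants (three layers, so r = 3):

```python
eps = [0.01, 0.02, 0.005]      # eps_1, eps_2, eps_3 (assumed values)
kappa = [2.0, 1.5, 3.0]        # kappa_1, kappa_2, kappa_3 (kappa_1 never multiplies)

# radius = sum_i eps_i * kappa_{i+1} * ... * kappa_r
radius = 0.0
for i in range(len(eps)):
    blowup = 1.0
    for k in kappa[i + 1:]:
        blowup *= k            # product of the Lipschitz constants above layer i
    radius += eps[i] * blowup

# eps_1*kappa_2*kappa_3 + eps_2*kappa_3 + eps_3 = 0.045 + 0.06 + 0.005
assert abs(radius - 0.11) < 1e-12
```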
So that's eventually what you've got. Any questions?

STUDENT: I'm still a little unsatisfied with having to add epsilon [INAUDIBLE] to our examples. [INAUDIBLE] which is commonly assumed. And then we're mapping the [INAUDIBLE] the sets that we've covered. For example, why isn't epsilon 1 times kappa 2, kappa 3, kappa 4 up to kappa k — why won't that cover it?

Right. So I guess the question is: why do you have to pay more than this one term? Suppose — let me try whether this works — suppose your function class is F1 composed with fixed functions: a fixed f2, a fixed f3, and so on up to fr, all fixed. Then you only have this term. But in general you also have to cover the possibilities for the second layer and the third layer, and so on. That's why you have to pay the other terms. OK, cool. So now we are done with this lemma, and let's go back to the proof of the theorem. The proof of the theorem, as I kind of alluded to before, is pretty much just an annoying calculation, in some sense. There is a way to do the calculation in a simpler way, but I'm going to first show you a "zero-knowledge" proof: I'm just going to tell you that I choose my epsilon i like this, and it just works out. And then I'm going to show you the way I would actually work it out. If I write a paper, I show the first kind of proof, which is just choosing some epsilon i. So let's start with that. Basically, everything is about choosing the epsilon i. First, of course, you know that g(epsilon i, c i-1) is O tilde of c i-1 squared times Bi squared over epsilon i squared, because each layer is a linear model composed with a 1-Lipschitz function. Right?
So recall that each of the Fi is a linear model composed with a fixed 1-Lipschitz function. And for a linear model, the log covering number is supposed to be something like the norm of the input — c i-1 — times the norm of the parameter — Bi, which is the (2,1) norm of Wi transpose — divided by the radius, all squared. This is what we had last time; we didn't prove it, but it's the lemma about the log covering number of linear models. So we plug this in, and then basically you have two quantities. One is the log cover size, which is the sum of c i-1 squared Bi squared over epsilon i squared. And the other is the radius, which is the sum over i from 1 to r of epsilon i times kappa i+1 up to kappa r. So you basically have these two things that you want to trade off: you want the log cover size to depend on the radius as well as possible. So you choose the epsilon i to get the best trade-off. If I give you the zero-knowledge proof, I'm going to choose epsilon i to be (c i-1 squared Bi squared over kappa i+1 up to kappa r) to the 1/3, times epsilon, divided by the sum over the layers of Bj to the 2/3 over kappa j to the 2/3 times the product of the kappas to the 2/3. All right. So if I choose epsilon i to be this, then I claim that the sum of epsilon i times kappa i+1 up to kappa r will indeed equal epsilon. And why is that? I'm going to do the derivation for you, but I don't feel like you really need to verify it on the fly, or even necessarily verify it later. But just for the sake of completeness, let me do the calculation. You just plug in epsilon i here, and you get, I think, c i-1 to the 2/3, Bi to the 2/3.
This comes from these two terms, and then there's something from this and this, and also this thing. So you can organize those things — I'm treating these as constants for the moment in this derivation — so I get this multiplied by this, and you get the kappa-to-the-2/3 factors. By the way, if you don't want to verify this, just bear with me for a second. One other thing is that ci is also a function of the kappas, because recall that ci is the norm bound for the layers, which depends on kappa 1 up to kappa i.

STUDENT: One question — the i in the sum and the product, is that different from the i in the epsilon i?

Ah, yes. You're right — ideally you should use a different index, a j, in the sum, just for the sake of correctness. But after doing the sum and the product, that index is summed out in the second part. So, anyway, let me do this tedious thing. Recall that ci is the norm bound, and ci is defined to be something like the product of kappa 1 up to kappa i. So, let's put Bi in front; then from c i-1 you get kappa 1 to the 2/3 up to kappa i-1 to the 2/3, and from the other factor you get kappa i+1 to the 2/3 and so on — that's from here. And then you still multiply by the same denominator. And then you simplify the first sum: you can see that the only missing factor is kappa i, so this is equal to Bi to the 2/3 over kappa i to the 2/3, times the product over all layers of kappa to the 2/3, summed from 1 to r, times that denominator thing. And now, let's deal with that thing: you can see that this one cancels with this one, and this one cancels with this one. So you get that it really equals epsilon.
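Rather than verifying the cancellation by hand, it can be checked numerically. A sketch with toy values, using the shorthand alpha_i = c_{i-1} * B_i and beta_i = kappa_{i+1} * ... * kappa_r (my shorthand, not the lecture's): the stated choice of epsilon_i makes the radius exactly epsilon and the log cover size exactly (sum_j (alpha_j beta_j)^(2/3))^3 / epsilon^2.

```python
kappa = [2.0, 1.5, 3.0]        # assumed per-layer Lipschitz constants
B = [1.0, 0.5, 2.0]            # assumed per-layer parameter norms
c0, eps = 1.0, 0.1

r = len(kappa)
c = [c0]
for k in kappa:
    c.append(c[-1] * k)                        # c_i = c_0 * kappa_1 ... kappa_i
alpha = [c[i] * B[i] for i in range(r)]        # alpha_i = c_{i-1} * B_i
beta = [1.0] * r
for i in range(r):
    for k in kappa[i + 1:]:
        beta[i] *= k                           # beta_i = kappa_{i+1} ... kappa_r

Z = sum((a * b) ** (2 / 3) for a, b in zip(alpha, beta))
eps_i = [(a * a / b) ** (1 / 3) * eps / Z for a, b in zip(alpha, beta)]

radius = sum(b * e for b, e in zip(beta, eps_i))
log_cover = sum(a * a / e ** 2 for a, e in zip(alpha, eps_i))
assert abs(radius - eps) < 1e-9                          # radius comes out to eps
assert abs(log_cover - Z ** 3 / eps ** 2) < 1e-6 * log_cover
```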
And the log cover size: let's first write the trivial thing. It equals the sum of c i-1 squared Bi squared over epsilon i squared, and you plug in epsilon i here. Maybe let's call that gigantic normalization constant Z. So you get Z squared over epsilon squared times the sum of c i-1 squared Bi squared times (c i-1 squared Bi squared over kappa i+1 up to kappa r) to the minus 2/3. And there are some cancellations — sorry, I think I jumped a step in my notes — you plug in the definition of c i-1, and you get Bi to the 2/3 over kappa i to the 2/3 times the product of the kappas to the 2/3, summed over i, which is the same sum Z again. And, eventually — let me not do every step carefully — you get this quantity over epsilon squared. OK. So I guess maybe this is a good demonstration of why I shouldn't do it this way: even verifying it against my notes, which have almost all the steps, is kind of tricky. But anyway, before talking about how to do this better, let's first agree that we're done, right? Because now you see the log cover size is bounded by something over epsilon squared, and that's what we wanted to have. And then you apply the tool from covering numbers to Rademacher complexity: recall that if your log covering number is R over epsilon squared, then the Rademacher complexity is something like the square root of R over n. This is what we discussed last time. And if you apply this small tool, then the Rademacher complexity will be the square root of this quantity over n, and then you are done. OK? So we are done.
But I want to share how to do this a little more easily, without going through all of that pain. It's a small trick — purely a mathematical trick. I don't know how many of you know it; maybe you all do, maybe you all don't. But anyway, let's talk about it. Basically, the question is that you care about the trade-off between these two quantities. If you abstract it, it's a trade-off between something like — let's use different symbols — the sum of alpha i squared over epsilon i squared, versus the sum of beta i epsilon i. That's the game you're dealing with. And how do you do the trade-off? You use the so-called Hölder inequality. There are many equivalent ways to write it. For example, you can write that the inner product of a and b is at most the p-norm of a times the q-norm of b, when 1/p plus 1/q equals 1. You can also write it as: the sum of ai bi is at most (the sum of ai to the p) to the 1/p times (the sum of bi to the q) to the 1/q — exactly the same thing. When p is 2, this is the Cauchy-Schwarz inequality. We need something slightly different: p equals 3, so q equals 3/2. Then you get that the sum of ai bi is at most (the sum of ai cubed) to the 1/3 times (the sum of bi to the 3/2) to the 2/3. So in some sense, all of these inequalities deal with quantities of this form. I'm not sure whether I've lost you, so maybe let me just give you an overview of what I eventually want to do. Eventually, I want to consider the product of the log cover size and the radius squared — and let's say the radius is the sum of beta i epsilon i.
This product is larger than (the sum of (alpha i beta i) to the 2/3) to the power 3. If you can show that, then the epsilon i's cancel out. So let me do this properly. We care about the sum of alpha i squared over epsilon i squared versus the sum of beta i epsilon i — the latter is your radius epsilon, and the former is your log cover size. And what you can do is claim: (the sum of alpha i squared over epsilon i squared) times (the sum of beta i epsilon i) squared is larger than (the sum of (alpha i beta i) to the 2/3) to the 3. This is essentially the Hölder inequality — let me justify it in a moment, but suppose you believe me on this. And suppose you also believe that equality is achievable, which I will also justify in a moment. If equality is achievable, it means there exist epsilon i's such that the sum of alpha i squared over epsilon i squared equals that quantity cubed divided by (the sum of beta i epsilon i) squared. And recall that the sum of beta i epsilon i is your epsilon, and the left-hand side is your log covering number. So you get that you can choose the epsilon i such that the log covering number is at most this quantity over epsilon squared — and this quantity is exactly the R you are looking for. And you don't have to do any of that verification, right? That's it. You basically just plug in alpha i and beta i, and that's it. So does that make sense? Basically you cancel out the epsilon i's: you find the best epsilon i by proving the best possible inequality, and you also want the inequality to be achievable. Another situation where this is useful: you've probably seen this kind of form, where you have a parameter eta, and you have eta plus B over eta.
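The claimed inequality and its achievability can both be checked numerically. A hedged sketch with random made-up alpha's and beta's: the left side is at least (sum (alpha_i beta_i)^(2/3))^3 for arbitrary positive epsilon_i, and choosing epsilon_i proportional to (alpha_i^2 / beta_i)^(1/3) attains equality.

```python
import random

random.seed(0)
alpha = [random.uniform(0.5, 2.0) for _ in range(4)]
beta = [random.uniform(0.5, 2.0) for _ in range(4)]
rhs = sum((a * b) ** (2 / 3) for a, b in zip(alpha, beta)) ** 3

# The Hölder-type inequality holds for arbitrary positive eps_i:
for _ in range(100):
    e = [random.uniform(0.1, 2.0) for _ in range(4)]
    lhs = sum(a * a / x ** 2 for a, x in zip(alpha, e)) \
        * sum(b * x for b, x in zip(beta, e)) ** 2
    assert lhs >= rhs - 1e-9

# Equality is attained at the optimal choice eps_i ~ (alpha_i^2 / beta_i)^(1/3):
opt = [(a * a / b) ** (1 / 3) for a, b in zip(alpha, beta)]
lhs_opt = sum(a * a / x ** 2 for a, x in zip(alpha, opt)) \
    * sum(b * x for b, x in zip(beta, opt)) ** 2
assert abs(lhs_opt - rhs) < 1e-6 * rhs
```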
You want to bound something like this, and you can choose your eta arbitrarily. So how do you do it? Many people will tell you to find the minimizing eta by taking a derivative and setting it to zero. That's fine. But my way to do it is just to prove that eta plus B over eta is at least 2 times the square root of B — this is AM-GM, or Cauchy-Schwarz, whatever you call it — and this inequality is achievable: you can attain equality. So the best value of this expression is basically 2 square root of B. If you know the equality is attainable, then you know there exists an eta such that eta plus B over eta equals 2 square root of B, and then you've gotten rid of eta and you get the best bound you want. It's the same logic here: you prove an inequality so that you can cancel out the parameter you want to choose, and if the inequality can be attained as an equality, then you know you are getting the best parameter. And you don't even necessarily have to compute what the epsilon i are. Of course, if you're writing a paper, you probably still want to compute the epsilon i and do the zero-knowledge proof — that's why all the papers show you that kind of thing. But in your mind, you should probably do this latter version. At least that's what I do in my mind when I do research like this, because it's so fast, and you get a better estimate of what bound you can have. And in some sense, this is useful in many cases, because one of the ways to make your theoretical research faster is to have a lot of modularized small steps, each of which you can do very, very fast.
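A one-line check of this eta trick (toy value of B, assumed for illustration): a brute-force grid minimum of eta + B/eta matches the AM-GM answer 2*sqrt(B), attained at eta = sqrt(B).

```python
import math

B = 7.0
# Grid search over eta > 0; AM-GM says the true minimum is 2*sqrt(B).
best = min(eta + B / eta for eta in [0.01 * k for k in range(1, 100000)])
assert best >= 2 * math.sqrt(B) - 1e-9        # AM-GM lower bound always holds
assert abs(best - 2 * math.sqrt(B)) < 1e-3    # attained, up to grid resolution
```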
One way I've found that people get into these very messy calculations is this: if you prove something hard, you have to use a lot of pages, so your eventual product is something like a 20-page proof, or maybe more — sometimes there are 70-page or 100-page proofs. At least when I do those kinds of proofs, if I change one part, I never have to redo the whole 100-page calculation to know the final outcome. Basically, after a certain point, I already know that maybe these two pages are the most important thing, and I also know how this page translates to the final outcome. I have a kind of very fast mental data structure, so that I know: if this part can be improved by a factor of 2, what does that mean for my final outcome? The point is to make the pieces abstract enough that you have this very fast conversion, and then you can iterate very fast. The flip side is the opposite model, where if you change your proof in one part, you have to redo all the other parts — that would be much slower. So this is the kind of trick I've realized: if you can do these small abstract things very fast, then you can iterate faster in your research. Anyway. So far, does it make sense? And if you really care about why this inequality is true — I was trying to justify it: you can just use the Hölder inequality. If you apply the Hölder inequality, you get something like this, and this is actually exactly the same thing, because you can choose your ai and bi appropriately: you just say ai cubed maps to this term, and bi to the 3/2 maps to this term, if you want to verify.
But again, if I have to verify this by matching exponents, it's still too slow for me. So what I do is also memorize other versions of the Hölder inequality, so that I can do it faster. The version I memorize in my mind is something like this: (the sum of ui squared) to the 1/3, times (the sum of vi) to the 2/3, is larger than the sum of (ui vi) to the 2/3 — which is even closer to the form here. And the way to memorize it: on the left, the ui term has a square inside and a 1/3 outside, and the vi term is linear inside with a 2/3 outside; multiplying the inside and outside exponents, both come out as the 2/3 powers on the right-hand side. And if you know that, then you know the square here can cancel the epsilon i, because epsilon i appears squared in one sum and linearly in the other, so they can cancel each other. I'm not sure whether this makes sense — it probably takes some practice. If you see this enough times, you know what kind of inequality you can use. Anyway, I should probably wrap up this discussion. Any questions? OK. Let's see — 10 minutes. I'll use the next 10 minutes to motivate what I'm going to discuss next. Go ahead.

STUDENT: What part is this from? This inequality?

Yes — because the inequality can be achieved. That's why you know it's the best choice.

STUDENT: Oh, you mean the final box?

Yeah. OK, maybe let me discuss that — I'll answer that in the next 10 minutes. Right. OK, so basically, next we're going to do something better than this.
And actually, it turns out the proof is cleaner, to some extent — in some sense because it's capturing the right quantity. So next we're going to have generalization bounds that depend on the actual Lipschitzness. And I'm going to argue that the Lipschitzness we had before was only an upper bound. Before, we had this bound, where you essentially have a dominating term — the product of the spectral norms — times other terms, which are just polynomial in the norms and not very important. And this product is only an upper bound on the Lipschitzness — a pretty worst-case upper bound, because for your network to actually achieve this Lipschitzness, you'd have to construct something quite special. And even though this worst-case upper bound is achievable in certain cases, you'd still hope the network you find is, empirically, much better than this. So the high-level goal is to replace this product of the spectral norms by something more accurate. And there are several motivations to do this. One relates to a limitation of this bound: the operator norm of each Wi has to be larger than 1 — or you can even arguably say larger than the square root of 2 — to make sure f(x) is not too small. Why is this the case? Look at every layer: let hi be the i-th layer's activation. Then the 2-norm of hi+1 is the norm after applying the next layer. And if you do a heuristic, suppose you believe that the ReLU activation kills half of the coordinates — so it sets half of the coordinates to 0.
Suppose you have that. Then it means that after a ReLU, your norm will reduce by a factor of 1 over square root of 2, because you kill half of the coordinates. Of course, this is very heuristic — just a belief, an assumption. But suppose this is the case. Then the norm of hi+1 is less than 1 over square root of 2 times the operator norm of Wi+1 times the 2-norm of hi. So you can see that each layer, you can only grow the norm of hi by this factor. So if the operator norm of Wi is less than square root of 2, then you are shrinking the norm over the layers: the norm at every layer becomes smaller and smaller, and eventually they converge to 0, so your output will be very small. That's why you have to make sure the operator norm of Wi is somewhat big — it cannot be too small. In the most optimistic case, you want the operator norm to be larger than 1, but in the more typical case you need it even to be larger than square root of 2. So in some sense, this means that the product of the spectral norms will be big. Good. And another thing — motivation two — is something I mentioned: this is only a worst-case upper bound on the Lipschitzness, a very worst-case one. In practice, the Lipschitzness on the data points x1 up to xn — or the Lipschitzness on the population distribution, on an x drawn from P — could be much better, and this bound doesn't capture that. And another thing, which we'll discuss in later lectures: it turns out that SGD prefers flat local minima. This is something widely believed, and in certain cases we can prove it. And flatness of a local minimum — roughly speaking, we will justify this in later lectures — corresponds to the Lipschitzness of the model.
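The "ReLU kills half the coordinates" heuristic is easy to simulate. A minimal sketch, assuming the pre-activation has i.i.d. symmetric (here Gaussian) coordinates: ReLU zeroes roughly half of them, so the squared norm halves and the norm drops by about 1/sqrt(2).

```python
import math
import random

random.seed(0)
d, trials = 1000, 200
ratios = []
for _ in range(trials):
    h = [random.gauss(0, 1) for _ in range(d)]        # assumed symmetric coords
    relu_h = [max(0.0, x) for x in h]                 # ReLU zeroes ~half of them
    n0 = math.sqrt(sum(x * x for x in h))
    n1 = math.sqrt(sum(x * x for x in relu_h))
    ratios.append(n1 / n0)

avg = sum(ratios) / trials
# norm shrinks by roughly 1/sqrt(2) ~ 0.707 per ReLU under this heuristic
assert abs(avg - 1 / math.sqrt(2)) < 0.02
```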
On the empirical data, that is. So you can see that this is not the worst-case Lipschitzness over all points — it's the Lipschitzness on the empirical data. Which further justifies that I probably want a bound that depends on the Lipschitzness on the empirical data, not the Lipschitzness in the worst case. And another remark: it's OK to have a generalization bound that depends on the empirical data x1 up to xn. Sometimes this is actually nice, because if the generalization error is bounded by some function of the classifier and x1 up to xn, that's still useful: you can use that function as an explicit regularizer. So there's no problem for a generalization bound to depend on the empirical data. You probably don't want the generalization bound to depend on the population data, because then you wouldn't be able to evaluate it — but if it depends on the empirical data, it's fine. So, concretely, in the next lecture we will prove that the generalization error — the test error of theta — is bounded by some function of the Lipschitzness of f theta on x1 up to xn and the norms of theta, and this function is a polynomial function, with nothing exponential in it. OK, I'll stop here. Any questions? And interestingly, the proof in the next lecture is actually easier than today's, I hope. I don't know how you found today's proof — it's pretty brute force, so in that sense it's not very hard, but it's pretty messy. I guess I will see you next week.
Stanford CS229M: Machine Learning Theory (Fall 2021) — Lecture 7: Challenges in DL theory; generalization bounds for neural nets

OK, I guess let's get started. So in this lecture, what we're going to do is start by talking about deep learning, especially some of the challenges in deep learning theory. And then in the next probably 5 to 10 lectures, we are going to discuss different aspects of deep learning — you'll see we're going to talk about optimization, [INAUDIBLE] and so on. So basically, in deep learning theory there are different aspects: for example, optimization, which we'll spend probably two lectures on later, and generalization, another question which we'll probably talk about for more than three lectures. And at the end of the course, we are going to talk about some other, slightly different topics. So in some sense, you can view this as an outline for the next five weeks. To talk about deep learning theory, I think it's useful to first summarize classical machine learning theory, which I actually didn't present from a bird's-eye view at the beginning of the course, because I felt that too much information at the beginning would be a bit much. But now I'm going to give a higher-level view of what classical machine learning theory does, in terms of its different aspects or topics. So in the more classical machine learning theory, there are several things. One is called approximation theory — other keywords are expressivity or representational power; if you see those terms, you know they're all about the same thing. What it's really about is bounding L(theta star), the loss of the best model in your family.
So far, up until this week, we've always talked about excess risk. We compare with the best model in the class, and we say that if you can match the best model in the class, then you are done. But actually it's not done, because maybe you are using the wrong hypothesis class — the best hypothesis in your hypothesis class is probably not great. Approximation theory is basically trying to deal with this: you are trying to understand whether your hypothesis class is powerful enough to express the functions you care about. For example, a kind of trivial case: suppose you have some positive data here and some negative data there, arranged so that no line separates them well. Then you know that if you use a linear model, the best linear model is not going to do great — if you find the best linear model, it will still make many mistakes. So in this case, you can say that L(theta star) wouldn't be great if you choose your family Theta to be the linear family. And then you can study which hypothesis classes contain a good classifier even if you have access to population data, and so on. So in some sense, this is trying to understand how well a hypothesis class H can approximate the ground-truth labeling function. That's one type of question. Another type of question is what we've discussed already: the statistical aspect, which people sometimes call generalization theory. This is about the excess risk, as we discussed in the last several weeks. You try to bound from above the difference between your learned hypothesis and the best hypothesis theta star. And what we have done is bound this by (L(theta hat) minus L hat(theta hat)) plus (L hat(theta star) minus L(theta star)) — and people call the first term the generalization error.
The generalization error is the difference between the population loss and the empirical loss at the learned parameters, right? So this is basically the difference between training loss and test loss, right, at the learned parameters theta hat. If, say, theta hat is the ERM, then this is talking about the ERM, but maybe in other cases you are using some other algorithm to find theta hat, and then you want the generalization error for that theta hat. And the second term, as we argued, is always small. No matter what hypothesis class you use, basically, as long as your loss function is bounded, that term is always something like 1 over square root of n. So basically, that's why we don't care about that term that much. OK, so what we have done was something like you prove this kind of generalization bound. So you prove something like L theta hat minus L hat theta hat is bounded by something like some complexity over square root of n. And the principle here is that if your hypothesis class is of low complexity, then you have better generalization error, right? So simple hypotheses can generalize better. So I think sometimes people also call this Occam's razor. This is, I think, kind of like a philosophical principle, which dates back to something like the 14th century, and the principle is that a simple or parsimonious explanation can generalize better to other situations. And you can see, even from these two things, that there is some kind of conflict or trade-off between the approximation theory and the generalization theory. Because if you use a very, very simple hypothesis class, then your L theta star may not be good enough. For example, for the data I drew here, if you use a linear model, then your L theta star is not great. But your generalization error could be very good, because your model is linear and simple.
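The 1 over square root of n behavior of the second term can be illustrated with a quick Monte Carlo sketch (my own, not from the lecture; the population error 0.3 is an arbitrary choice): for a single fixed hypothesis with a bounded 0-1 loss, the gap between empirical and population error shrinks roughly like 1/sqrt(n) as the sample size grows.

```python
import numpy as np

rng = np.random.default_rng(0)
pop_err = 0.3   # population error of some fixed hypothesis (assumed)
trials = 2000

def avg_gap(n):
    # |empirical error - population error| for the fixed hypothesis,
    # averaged over many independent draws of an n-sample training set.
    emp_errs = rng.binomial(n, pop_err, size=trials) / n
    return np.mean(np.abs(emp_errs - pop_err))

gap_small, gap_large = avg_gap(100), avg_gap(10_000)
print(gap_small, gap_large)  # gap shrinks roughly like 1/sqrt(n)
```

Going from n = 100 to n = 10,000 (a 100x increase) shrinks the average gap by about 10x, consistent with the square-root rate; uniform convergence over a whole hypothesis class then adds the complexity factor on top.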
So there is some trade-off between them-- and I think people also sometimes call this bias and variance. So the variance mostly corresponds to the generalization theory. It corresponds to the statistical error introduced from learning because you have finite data. That's why you have to pay something that depends on how many examples you have. That's the variance. And the bias, or the expressivity, is a quantity that only depends on the fundamental power of the hypothesis class. It's not something that depends on how many examples you have, right? The bias-variance trade-off is essentially the same thing here, but the exact definition of bias and variance really only applies to square loss and linear models. That's why we don't use it explicitly here. But the principles are somewhat related. And you can also kind of extend this generalization theory a little bit by considering the regularized loss. In some sense, you can consider this as an application, or implication, of the generalization theory, which says that if you use a regularized loss, something like L hat reg equals L hat theta plus lambda R theta, where R is a regularizer that captures the complexity of the hypothesis-- then you can hope to have a statistical claim of the following form. Of course, this depends on exactly which regularizer you use, what models, so on and so forth, but the form of the claim is something like: if theta hat lambda is the global minimizer of L hat reg, then you have a generalization bound. You can bound the excess risk, or the generalization error-- I guess they are pretty much related, as we have discussed, right? So they are bounded by something. So this is the type of result you probably get from this kind of statistical generalization theory.
The reason is that if you optimize this regularized loss, and you indeed find a very small regularized loss, that means that your regularizer-- the R theta, the complexity-- is small, and also that your training error is small. And if both of these are small, then you can show that your excess risk is small, because this model will generalize to the population, or the test cases. And then there's a third aspect, which is called optimization. Any questions so far? Right, so there's a third aspect, which is called optimization. The question is about numerically how to find theta hat. Theta hat could be the arg min of the training loss, or maybe you can talk about theta hat lambda, the minimizer of the regularized loss from [INAUDIBLE] right? And at least in the classical way of thinking about this, you can basically view this as a separate question-- you can forget about where your data come from. You can forget about why you care about minimizing this training loss. You just say, I'm given this training loss; that's my job, right? And typically the approach is something like: if the loss function is convex, you use convex optimization. And maybe you can use gradient descent for non-convex functions, so on, so forth. Or maybe stochastic gradient descent. There are many different approaches. And when you measure the success, the interface is that you care about how well you can approximate the minimizer. Of course, you can never find the exact minimizer using a numerical approach, right? So you always have some small error compared to the minimizer of the empirical loss, and you can measure the error in different ways-- maybe measure it in terms of the sub-optimality, that is, how different your solution is from the best minimizer in terms of the loss value. Or you can compare other quantities.
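As a toy version of this optimization story (my own sketch, not from the lecture; the dimensions and lambda are arbitrary choices): gradient descent on a ridge-regularized least-squares loss, measuring success as sub-optimality relative to the closed-form minimizer of L hat reg.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 50, 10, 0.1
X, y = rng.standard_normal((n, d)), rng.standard_normal(n)

def reg_loss(w):
    # L_hat_reg(w) = mean squared error + lambda * ||w||^2
    return np.mean((X @ w - y) ** 2) + lam * w @ w

# Closed-form minimizer of the regularized loss (ridge regression).
w_star = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)

# Gradient descent with step size 1/L, where L is the smoothness constant.
L = 2 * (np.linalg.norm(X, 2) ** 2 / n + lam)
w, subopt = np.zeros(d), []
for _ in range(200):
    grad = 2 * (X.T @ (X @ w - y) / n + lam * w)
    w -= (1.0 / L) * grad
    subopt.append(reg_loss(w) - reg_loss(w_star))

print(subopt[0], subopt[-1])  # sub-optimality shrinks toward zero
```

Because the regularized loss is strongly convex here, the sub-optimality decays geometrically; in the classical picture this numerical question is entirely separate from where the data came from.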
So in some sense, I think from this kind of summary, you can think of the statistical part as pretty much independent from the optimization part. Of course, there are also interesting interfaces. For example, when you add a regularizer, you can ask: what regularizer can simultaneously have good statistical performance but also be easy to optimize, right? And by easy to optimize, it means that you can optimize it fast, or maybe optimize it in a certain time, maybe d time or d squared time, so on so forth. So there are still interactions between the different parts, but if you just need a kind of high level understanding, you can think of them as separate parts, right? The interactions are more in the lower level details about how you achieve the best statistical efficiency, or the best computational and statistical efficiency-- then you have to talk about the interactions. But at a high level, you don't have to think about them simultaneously. You can think of them roughly separately. Is there a question? Yeah, [INAUDIBLE] Sorry-- no, no. This is another visual, sorry. My bad. This is just two things. My writing is bad. And these two quantities are basically similar, right? So you care about the excess risk, which is the most important thing, but which is almost the same as the generalization error. And actually, if you bound the generalization error, then you bound the excess risk. Sorry, my writing is not clear. So any questions so far? So these are the standard ways of thinking about these questions, but what happens in deep learning? What happens in deep learning is that, as you'll see, things become more complicated, and for a fundamental reason. And I think for deep learning, there are probably two things that change, at least on the surface.
So one thing that changes is that from a linear model, it becomes a nonlinear model, right? And this directly affects the optimization, because when you have a nonlinear model, it becomes a non-convex loss. But this wouldn't change the structural view fundamentally, because it only makes the optimization question harder, right? So at least at the beginning, this is what I thought five years ago, maybe more than five years ago. When I started to do deep learning theory, right after deep learning took off, at the very first I thought that the only difference is that now the optimization question becomes harder. And then the question is just how do you optimize better? But then, probably about three or four years ago, people realized that there is also another fundamental difference from the statistical perspective, which is that empirically, you always use these so-called overparameterized models. Maybe it's not precise to say that you always use overparameterized models, but generally, overparameterized models are better-- more parameters are generally better, or almost always better. So more parameters generally help. And it can help even to the extent that your parameters outnumber the data points, right? So it even helps when d is larger than n. And it even helps when you already have zero training error. So this is a plot that I got from some paper. This is from a paper by Neyshabur, Tomioka, and Srebro in 2015. So this is what they found. Of course, this is only a very small data set, but roughly speaking, the same phenomenon also holds for larger data sets. And you can see here that the black curve is the training error, and the x-axis is how many hidden units, or how large the network is. Hidden units means the number of neurons in your network. If you have more hidden neurons, you have more parameters.
And actually, the number of parameters is quadratic in the number of hidden neurons in this fully connected case. This is a very simple fully connected network on MNIST, and you can see that after you have more than 64 hidden neurons, you can fit MNIST perfectly. 0% error-- I think literally zero. Maybe not exactly, maybe 0.01% error or something like that. And if you look at a typical textbook, what you would do is predict that the test error will go up after a certain point, because you are overfitting. You are using too complex a model, and you are overfitting to the data. That's the purple curve, which you would probably read from some of the classical textbooks. And it actually does happen in some classical settings, but it does not happen often in neural networks, or probably never happens in neural networks. And what really happens is the right one. The test error actually continues to improve as you have more and more neurons, even though you already memorized everything, right? So if you compare 64 with this 4k, basically these are just two networks. Both of them fit the training data with 100% accuracy, but one of them has better test accuracy than the other. So this is kind of a big mystery from a theoretical point of view, especially if you believe in the classical trade-off between bias and variance, or the trade-off between expressivity and generalization power. So this is the big open question, right? And briefly, let me discuss again what the impact is on each of these concepts. Actually, you even have to rethink some of these concepts; some of them become entangled or intertwined now in deep learning. So first of all, for approximation theory, I think things don't change that much, at least compared to other parts. So for approximation theory, I think generally, I guess, you know that large models are expressive.
And there's actually something called the universal approximation theorem. I'm not sure if you have heard of it or not. In some sense, this is saying that if you have a network that is wide enough, then you can approximate any function. Of course, that's in some sense a misleading way to say it, because what does "large enough" mean, right? If you need an exponential number of neurons, that's indeed large enough, but that's not really implementable. Empirically, you don't even need that many neurons to be expressive. I think you just need a polynomial number of neurons. But anyway, the gist is, regardless of whether this universal approximation theorem is exactly answering the question, at least we believe that neural networks are very powerful. So we generally believe that the best model in this family, especially if you use a wide enough network, is generally small in loss. This is what we generally believe. And at least what you can show is that the minimum of the training loss is really small. Because if you have a neural network with more than n neurons-- n is the number of examples-- you can provably memorize all the training examples. At least you can find one network that memorizes all the training examples. That network may not generalize, but this already means that your minimal training loss is very small. It's probably zero. OK, so basically, for approximation theory, I think we generally believe that the models are very expressive. And then there's the generalization part, which becomes quite complicated. So another piece of information about what practical networks do is that in practice, people don't use very strong regularization. Only weak regularizations are used.
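The memorization claim can be made concrete in one dimension (my own toy construction, not the lecture's): with roughly n ReLU units-- one bias unit plus a ReLU "hinge" placed at each data point-- the design matrix is lower-triangular and invertible, so you can solve exactly for output weights that interpolate n arbitrary labels.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
x = np.sort(rng.uniform(-1, 1, n))   # n distinct 1-D inputs
y = rng.standard_normal(n)           # arbitrary labels to memorize

relu = lambda t: np.maximum(t, 0.0)

# One-hidden-layer net with n units: a bias unit plus a ReLU hinge at
# each of the first n-1 sorted inputs. Entry (i, j) of the ReLU part is
# relu(x_i - x_j), nonzero only for i > j, so the n x n design matrix
# is lower-triangular with a nonzero diagonal -- hence invertible.
design = np.hstack([np.ones((n, 1)), relu(x[:, None] - x[None, :-1])])
a = np.linalg.solve(design, y)       # exact output weights

def net(t):
    return a[0] + relu(t[:, None] - x[None, :-1]) @ a[1:]

train_err = np.max(np.abs(net(x) - y))
print(train_err)  # the network memorizes all n points
```

This network interpolates by construction but says nothing about generalization, which is exactly the point: expressivity alone guarantees small training loss, not small test loss.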
And this is kind of an important thing to say, because recall that even in a classical setting, you can have a setting where you have a lot of parameters, but a strong regularization to compensate. So that's allowed in a classical setting, right? For example, suppose you use sparse linear regression, where you have a lot of features-- the dimensionality is very high-- but you regularize the sparsity of your linear model. Then you-- wait, speaking of sparse linear models, I think I forgot to do something that we left last time about the comparison between linear models. But anyway, my bad. I think I should have it. But anyway, let's continue with this. So what I was saying is that even in the classical case, you are allowed to have d bigger than n. The dimension can be bigger than n as long as you use regularization, right? Because if you use regularization, you implicitly restrict the complexity. For example, if you say that the sparsity of your model is s, and s is less than n, then that's OK. However, in deep learning, in practice we only use very weak regularization, right? Typically just some L2. Even just with L2 it can work; sometimes even without L2, it can work pretty well. And also, the regularization strength is relatively small-- small enough that you can still fit your training data with basically 100% accuracy. And another way to see the weakness of the regularization is to consider the following fact. This regularized loss-- if you, for example, just regularize with something like L2 with some lambda-- doesn't have a unique global minimizer. Or at least, it has very different approximate global minimizers, right?
Maybe if you really care about numerical precision-- if you care about very, very small precision-- then maybe there's a unique global minimizer. But for practical purposes, there are many different global minimizers that are very similar in terms of the training accuracy and in terms of the regularized loss. And they all have very small regularized loss. They have very small loss from the regularizer part; they also have very small loss from the training error part; and they are different global minimizers. And another thing is that it's also not true that all of these global minimizers perform the same on the test set. I guess it's probably easier to just have a figure here. I think I did prepare a figure. So let's see. I think this is an experiment I did a few years back. There are many different kinds of plots like this you can find online in different papers. This is just one of them. I actually picked one that exaggerates the differences a little bit, but the gist is always the same. So this is what? This is CIFAR-10, and you have two algorithms, the red one and the blue one. And I'm plotting the training and the test error. And these two algorithms only differ by the learning rate. They have the same training objective. They have the same regularization strengths. It's just that the optimizers are different. So at the end of the day, you see that both of these two algorithms found some global minimizer, or approximate global minimizer. You can see the training error is close to 0 in both of the two cases, right? So both of these are global min in some sense, or at least approximate global min up to a very good approximation. But you can see that their test errors are very different. So that means that these are two different global min for sure, right, in the parameter space. And also, they perform very differently on the test set.
So that's kind of the mystery, right? Because this kind of refutes the possibility of having a theorem like in the classical case. So recall that in the classical case, typically you have theorems saying something like: if you find the global minimizer, or a global minimizer, or any global minimizer of the regularized loss, then you can generalize-- you can bound the generalization error. And this is no longer the case, because not all the global minimizers are the same. Some of them are better, some of them are worse, and you probably shouldn't have the same bound for all of them. And some of them probably just don't generalize at all, right? So this is saying that you cannot just say any global minimizer generalizes. You have to somehow distinguish the different global minimizers found by different algorithms. But what happens here, right? So what happens is that the optimization starts to come into play. And this is the reason. So basically, as I alluded to, in some sense, different optimizers find different global minima. And some of them are better, and some of them are worse. So that is saying that optimization is not only about finding any minimizer, any global min. If you just say you find a global min, that's not enough. You have to use optimization to find the right global min. So in some sense, the optimization has two jobs. One is that it has to find something that has a small error, or small regularized loss, and the other job is that it also has to find something that generalizes. It has to find a global minimum that can generalize. So in some sense, the picture is like this in my mind. So you have this-- I'm using a one dimensional thing, right? This dimension is the parameter. And basically, I'm envisioning a kind of toy case where the landscape of the training loss and test loss look like this, right?
So the training loss has two global minima. One of them is a good global minimum, and the other one is a bad one-- bad in the sense that the corresponding test error is bad. And the optimization algorithm is not only responsible for finding an arbitrary global min-- it actually has to find the right global min instead of the bad global min. So somehow, the optimization algorithm is doing something beyond what it's supposed to do, right? So I guess, in some sense, this is a one dimensional case. If you think about the high dimensional case-- this is something I often use in my slides-- it's kind of like you are going to a ski resort. And the first time I came to America, I didn't realize that you can have multiple valleys, or multiple parking lots, in the same ski resort. So when I go back home, I do gradient descent, right? I just go down to an arbitrary valley, and I found that my car was not there. And then it's actually trouble, because the resort is closed, and the lift cannot take you back up. So it's actually pretty annoying. And then I realized that actually there are many more global minima, and one of them is better than the others. And you have to find it, so it's not arbitrary. Gradient descent is doing something more than just arbitrary downhill skiing, right? [INAUDIBLE] is it the fact that [INAUDIBLE] Right. So why does the generalization theory break down? The question is exactly where, mathematically, the generalization theory breaks down. I think the bounds you can prove become vacuous-- the bounds you can prove under the existing language become vacuous. Basically, if you say that you want to prove a bound that works for all networks of size 10 million, or size 100 million, with only 1 million examples, then it wouldn't work anymore. So you have to have a more precise way to think about it. Does that answer the question to some extent?
[INAUDIBLE] what if you incorporated the fact that [INAUDIBLE] Right. Roughly speaking, that's the approach we're going to take. But there's one problem with this. If you do exactly what you said, there's a problem, which is you're going to get the same bound for any algorithm, right? But empirically, different algorithms have different performance. And the way to fix it is that you first say that different algorithms find models with different complexity, and then you can have different bounds for them. So the algorithm has to come into play in some way, right? So basically, that's kind of the conclusion here. The algorithm has to come into play in your statistical analysis, right? Because if you don't have the algorithm there, you are not going to distinguish these different algorithms. So in some sense, you entangle the statistics with the optimization to some extent. And so basically, the way to fix it-- at least the current plan, the general agenda that I think most of the researchers seem to agree on-- is that you analyze the optimization, and analyze why the optimizer finds a good local minimum. So basically, you need to have a theory that says something like: the optimizer finds a theta hat such that, one, this theta hat is an approximate global min of the empirical risk, and also, two, theta hat has some special property that you didn't explicitly ask for. For example, the property could be low complexity. So maybe, just to give you an extreme case, you run an algorithm without any regularization. But then you can say that even though I didn't regularize, actually the theta hat I found has low L2 norm, or even the minimum L2 norm. You can actually prove these kinds of theorems in certain cases. And then, because of this special property, this implies that it can generalize, right?
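For the extreme case just mentioned, a toy demonstration (my own sketch, not the lecture's notation): gradient descent from zero initialization on an underdetermined least-squares problem converges to the minimum-L2-norm interpolant, even though no regularizer was ever specified-- every gradient lies in the row span of X, so the iterates never leave it, and the unique interpolant in the row span is the min-norm one.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 10, 40                      # overparameterized: d > n
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Gradient descent on the unregularized loss (1/2)||Xw - y||^2, from w = 0.
# Each gradient X^T(Xw - y) lies in the row span of X, so the iterates
# stay there; the interpolant in the row span is exactly the min-norm one.
w = np.zeros(d)
lr = 1.0 / np.linalg.norm(X, 2) ** 2
for _ in range(20_000):
    w -= lr * X.T @ (X @ w - y)

w_min_norm = np.linalg.pinv(X) @ y
print(np.linalg.norm(w - w_min_norm))  # GD reached the min-norm interpolant
```

This is the simplest instance of an implicit regularization result: the "special property" (minimum L2 norm) comes from the algorithm and initialization, not from the objective.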
And people have proved theorems of this kind of form in many different cases. So for example, you can talk about SGD, right? So SGD probably has some special preferences in terms of what models it wants to find, and maybe SGD with different kinds of specifications, right? So you can have large learning rates, small batches, and so forth. I'll talk about that in a moment. But generally, we want to say that the practical optimizers people are using can have some preferences for certain types of global minimizers. And after you have this special preference, then you can use the-- so this part, from the special property, the low complexity, to generalization, this could be more classical. This could be classical theory, or maybe an improvement of classical theory, depending on what complexity measure you are talking about, as you suggested. So that's the current kind of statistical agenda for extending deep learning theory. Of course, there are other kinds of approaches, but I think this is pretty much the high level-- people have almost reached a consensus on the high level approach here, I think. And what are the best results? Let me have a brief summary of the best results people know, roughly speaking, in each of these aspects. So let me just make this a bit more formal. So I guess, a little more formally, you have probably three tasks in my language. So first-- I guess I'm repeating myself a little bit in some sense-- you prove that the optimizer converges to an approximate local or global min of L hat theta. And then in the second task, you also have to prove that, in addition to one, the theta hat also has low complexity. For example, something like R theta hat is less than C for some complexity measure R. And this R depends on the algorithm, and depends even on the details of the algorithm, like learning rate, batch size, so on so forth.
And then task three: you say that for every theta such that R theta is less than C, and maybe L hat theta is close to 0-- so for every theta with low complexity and small training error-- we have that the test error L theta is also small. So that's kind of the general idea. And what have people done in this kind of area? So regarding task one, which is the optimization question-- I think if you want to associate keywords with these, people would call the first question optimization, and the second question people often call implicit regularization. Yeah, probably I should explain this. It's implicit because you never told the algorithm to minimize this complexity; it's implicit in the optimization procedure. And it's a regularization effect because you get some low complexity solution. And the third one is probably more or less the classical generalization bound. And for task one, the optimization question, one line of research considers the case where you don't have overparameterization. Without overparameterization, in some special cases you can still prove this. For example, for matrix factorization problems, maybe linearized networks, or maybe something like task optimization, you can show that gradient descent or SGD can converge to a global min. So here, a linearized network means that you don't have any activations. Basically, the network is linear, so you just stack a bunch of linear models, which doesn't really do anything from a statistical point of view. You only analyze that as an exercise for your techniques, in some sense. But you can still publish papers on it, just because everything about optimization is very complicated. Even analyzing linearized networks is difficult.
So that's one of the things people have done. But you can see that this doesn't really address all the issues, right? Because you don't allow overparameterization, and it only works for linearized networks or matrix factorization problems, like matrix completion, so on so forth. And recently, in the last three or four years, I think, you can also do this optimization question for neural networks-- for almost any neural networks, deep, shallow, so on, so forth-- but with the caveat of special hyperparameters. So special hyperparameters means something like-- first of all, you need overparameterization. That's actually probably good, because anyway, empirically people use overparameterization. But the limitation is that you also need a special learning rate, or special initializations and learning rates, so on, so forth. And that becomes a problem. By the way, this is typically called the NTK approach, neural tangent kernel, which I'm going to talk about more in future lectures, and explain why it is called the neural tangent kernel. So this is the so-called NTK approach. And the problem with this approach is that this special initialization is a problem, and also the special learning rate or special algorithm. You also need something about batch size. For example, in most of the papers, the batch has to be very big. You can only analyze gradient descent; you cannot have stochastic gradient descent. So this is kind of the restriction on the hyperparameters. At the beginning we thought, OK, that's not a big problem. We have these hyperparameters, and then the next day we probably extend to other hyperparameters. But it turns out that there is some serious limitation in the hyperparameters. Because, as I motivated before-- in the figure we saw, right, this is a real experiment-- even if you change the learning rate schedule, you change the performance of your model.
So if you analyze a special learning rate schedule and a special initialization, then maybe you are not actually analyzing anything impressive. So for example, in this NTK case, the algorithm you can analyze wouldn't give you the best performance that deep learning offers. You probably get something like 80% on CIFAR, but the best algorithm probably gets like 95%. Of course, there are improvements along this line, but generally the issue is that you make these hyperparameters so special that you lose the correct implicit regularization effect of the optimizers. You are analyzing an optimizer that doesn't have the correct implicit regularization effect, so it doesn't generalize as well as the real deep learning algorithms. But still, I'm going to talk about this, because it's a very nice idea, and in certain cases it's pretty useful. And then for the implicit regularization question-- the question of why the optimizer prefers certain kinds of low complexity models-- people have had a lot of results on special cases. So special models-- and actually, maybe I should call them simplified models-- I don't know why, somebody took my yoga mat, yoga brick for some reason, and I have to use the book. Anyway. So special or simplified models, and also special optimizers. But here, the specialness is in the right way, so that you're analyzing the effect of the optimizer. So you focus on one aspect in each paper, in some sense. So what are the models that people have analyzed? For example, linear regression-- here you can say that certain initializations prefer certain kinds of models. And you can also talk about logistic regression, and here we will see that you can prove something like: even though the model just tries to minimize the logistic loss, it actually finds the max-margin solution.
And also matrix sensing, or matrix factorization problems with a linear neural network. So you can talk about these, and also, there are special aspects of the optimizers. And sometimes there has to be a combination of the problem and the optimizer, because certain optimizers wouldn't have implicit regularization for certain problems. So you can talk about GD, you can talk about SGD-- and for SGD, I think there is actually also a question about the noise covariance, like what covariance will give you the right implicit regularization, and also the noise scale, which also matters. And you can also talk about approximate dropout-- this is something you do in your optimizer which will change the implicit bias-- and you can also talk about the learning rate, which is actually important, and batch size, so on so forth. And there are also unsolved open questions, for example for momentum or normalization. All of these have some implicit regularization effect. So that's why this becomes complicated, right? Everything you do in your optimizer, everything you change, could possibly have an implicit regularization effect. Sometimes it's positive; sometimes it's negative. Of course, most of the tricks that we have seen have a positive effect, because that's why they survive and they are published, right? So that's the statistical part, I guess. And I'm also going to try to mention a more general result that me and some collaborators have done. So you can also try to have a more general result, which says something like: SGD on L hat theta is roughly equivalent to doing gradient descent on L hat theta plus lambda R theta for some regularizer R. This is a result that we can show-- this is a much simplified, high level idea of a result that we can show. But of course, there are limitations. These kinds of more general results have weaknesses in other aspects. For example, you may have additional assumptions, or you can only deal with certain stochasticity, so on, so forth.
But I think from this result you can see that this is the kind of thing that we are trying to do. So if you add stochasticity, then automatically, implicitly, you get a regularizer for free. Even though you are using stochastic gradient descent on the original training loss, somehow you get a regularizer for free somewhere. OK, so I think basically, we are going to talk about many of these in the next few lectures, in the future lectures. And for task 3, for the generalization bound, this is also an interesting open question for deep learning. Because you also want to have precise generalization bounds that can be compatible with the regularizer you got from the previous part, right? So we have said that the optimizer has a preference, but does that preference lead to better generalization? That's another open question, right? So for example, one of the papers in 2017 proved that if you use this as the complexity measure, where A_i is the weight matrix of the ith layer, then you can guarantee a generalization bound. That's one of the early results along this line. But the problem with this is that it is not precise enough, right? This is still too big to be meaningful in some sense. So you sometimes need more precise bounds. For example, if you can guarantee that-- I will talk about the limitations probably when I really talk about this, but this is still not precise enough. And you sometimes need a more fine-grained complexity measure that is more compatible. And also, ideally you want something that is a result of the optimizer, right? So you want this regularizer here to be the same regularizer as what you had in the implicit regularization effect part. So that's the third part. Yeah, I think that's basically a high level overview of some of the lectures we're going to have in the next few weeks. And of course, there are other open questions in deep learning, as well.
For example, what's the role of the parameterization? So in these tasks, I didn't mention any of those, so on, so forth. But for those kinds of things, I don't think there's a systematic study yet, so that's why we don't talk about them much for now. And I think for the immediate plan, I'm going to talk about task three here first, because we are in this mode of proving generalization bounds. We have talked about Rademacher complexity, and all of this depends on the Rademacher complexity. And I'm going to talk about that first, and then I'm going to move on to the other parts. Any questions so far? [INAUDIBLE] Sorry, I didn't hear the question. [INAUDIBLE] Yeah, I got the question. So the question is whether any of these results or tasks depends on the data distribution? Yes, they all depend on the data distribution, I think. So all of them assume some underlying data distribution. So some of them require something stronger, some of them just require some regularity conditions, but I don't think you can get away without any data distribution assumption. And some of them have very strong data distribution assumptions, to be fair. And actually, in some sense, in my opinion, that's one of the technical challenges here. It's kind of like a subtle balance. If you assume too much about the data, then you lose the realisticness. But if you assume too little about the data, then you have some hardness results. So certainly without any data assumption, you probably shouldn't be able to prove almost any results here, just because things become simply hard-- especially if you talk about computational procedures, it's very easy to get into NP-hard instances. So we need some data distribution assumption. And another even more complex question is, how do you leverage the data distribution assumption? We don't have a lot of tools. So for example, if you assume it's Gaussian, then what do you know?
You know something about the moments, so on, so forth, right? You can do certain kinds of derivations. But I don't feel like we have even used the properties of a Gaussian enough, in some sense. Let alone other kinds of data distribution assumptions-- we don't have a lot of good tools to use them. Cool. So if there are no other questions, I'm going to move on to the generalization bound for neural networks. And you can see that this is still roughly in the mindset of the classical setting. The only difference is that we are looking for proper complexity measures, not only a dimension dependency, but sometimes something more complicated. And you will see that this part is really a direct extension of what we have done in the last three weeks, because the tools are shared and it's really just that you need better tools. All right, so now let's talk about the particular setup. So we're going to start with two-layer neural networks. And then in the next few lectures, we're going to move on to multiple layers. And for two layers let's use the following notation. So let's say your parameter theta consists of two parts. One part is w, and the other part is u. So w is the second layer, and u is the first layer. So basically, the network f theta of x will be something like w transpose phi of u x, where u is a matrix that maps dimension d to dimension m, where m is the number of neurons. So basically, u x will be m-dimensional, and you apply an elementwise ReLU function. So phi is the elementwise ReLU function: phi of a vector z1 up to zm is equal to max of z1 and 0, up to max of zm and 0. So after you apply phi, you get an m-dimensional vector, and you inner product with w, and you get a single scalar. So we have a model that outputs a single scalar using these two layers, u and w. And again, we still call xi, yi the training data set, as usual. OK.
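In code, the model just described looks like this (a small sketch; the function and variable names are mine):

```python
import numpy as np

def two_layer_net(x, w, U):
    """f_theta(x) = w^T phi(U x), with phi the elementwise ReLU.

    x : (d,) input
    U : (m, d) first-layer weights, rows u_1, ..., u_m
    w : (m,) second-layer weights
    """
    z = U @ x                  # pre-activations u_i^T x, shape (m,)
    v = np.maximum(z, 0.0)     # phi: elementwise ReLU
    return float(w @ v)        # scalar output

# tiny sanity example
x = np.array([1.0, -1.0])
U = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w = np.array([1.0, 1.0, 1.0])
print(two_layer_net(x, w, U))   # phi([1, -1, 0]) = [1, 0, 0], so output 1.0
```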
So our goal is first to show a Rademacher complexity bound, and then we'll also talk about how this bound is relevant to practice. And I think for today we probably won't even be able to finish number one, because I'm going to have actually two bounds, one better than the other. So here is a theorem for a Rademacher complexity bound. So the theorem is: suppose you have a hypothesis class that consists of models that look like this, parameterized by theta, where you require that the norm of w is less than Bw and the norm of each ui is bounded by Bu. I guess I didn't define ui, so let me say this. So u is this m by d matrix, and let's say its rows are u1 transpose up to um transpose. So each ui is of dimension d, and that's why u times x is really the inner products of these ui's with x. That's the notation I'm going to use. OK? So basically, the ui's are rows of the weight matrix. So we restrict the norm of w and the norm of each ui by Bw and Bu, and then we also assume something about the data-- the expected 2-norm squared is less than C-- I guess actually this should probably be C squared, I have a typo here. And then under all of these assumptions, you can prove the Rademacher complexity bound: Rn of H is less than 2 times Bw Bu times C times square root m over square root n. So I guess just a remark: this is not an ideal bound, not a good bound, because m shows up in the bound. And actually, it shows up in the wrong way, because it says that if you have more neurons you have a worse bound. So the m shows up in the more classical kind of sense, where more neurons means more complex models, and that's not great. So basically, you cannot use this theorem to explain the success of deep learning or of overparameterized models, because this is saying overparameterized models will have bigger Rademacher complexity. But you want a bound that is better when m goes to infinity, in some sense, to explain the plot that I showed here.
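A quick Monte Carlo sanity check of the theorem (my own sketch, not part of the lecture): for a random dataset, a crude lower bound on the empirical Rademacher complexity, obtained by maximizing over a finite random sample of feasible models (||w|| <= Bw, each ||u_i|| <= Bu), should sit below 2 Bw Bu C sqrt(m)/sqrt(n), where C here is the empirical version of the norm assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 100, 5, 10
Bw, Bu = 1.0, 1.0
X = rng.normal(size=(n, d))
C = np.sqrt(np.mean(np.sum(X ** 2, axis=1)))   # empirical stand-in for E||x||^2 <= C^2

def f_all(w, U):
    # two-layer net outputs on all n data points at once
    return np.maximum(X @ U.T, 0.0) @ w        # shape (n,)

# crude LOWER bound on the empirical Rademacher complexity: for each sigma,
# maximize |<sigma, f(X)>|/n over a finite random sample of feasible models
est = 0.0
n_sigma, n_models = 30, 200
for _ in range(n_sigma):
    sigma = rng.choice([-1.0, 1.0], size=n)
    best = 0.0
    for _ in range(n_models):
        w = rng.normal(size=m)
        w *= Bw / np.linalg.norm(w)                      # ||w|| = Bw
        U = rng.normal(size=(m, d))
        U *= Bu / np.linalg.norm(U, axis=1, keepdims=True)  # ||u_i|| = Bu
        best = max(best, abs(sigma @ f_all(w, U)) / n)
    est += best / n_sigma

bound = 2 * Bw * Bu * C * np.sqrt(m) / np.sqrt(n)
print(est, bound)
```

This only probes finitely many models, so it is a loose lower estimate; the point is just that it should never exceed the theorem's upper bound.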
So as m goes to infinity, you want a better and better bound, in some sense. But this one gives you a worse and worse bound. But let's nevertheless prove this, because it's kind of like a warm up for what we'll show next. [INAUDIBLE] I see. So maybe let me rephrase the question first, to make sure. So the question, if my understanding is correct, is: why are you expecting this curve to keep decreasing forever, instead of really going up after a certain point? It's just that we don't have enough data points, right? Like we didn't run that very super large scale experiment. I think the answer is that we do think this is already large enough for us to believe that it will never go up, because 4k neurons for this task is really, really a lot. Like 64 already allowed you to memorize. Typically, you wouldn't even run so many. Maybe it would be easier to convince you if I showed you 4 up to 108-- you would see something like this, and then you'd ask me the question, and then I'd show you 108 up to 4k, and you probably would be more convinced. Yeah, but 4k is already pretty large, I think. But of course, you can never rule out the possibility that after maybe a million neurons it goes up. It just sounds unlikely. [INAUDIBLE] So I guess I think the intention of the question was whether this bound really is growing as m goes to infinity, right? Because both Bw and Bu could depend on m, and maybe they depend on m in different ways. Maybe Bw increases as m goes to infinity, and Bu probably decreases as m goes to infinity. So that's definitely a possibility, right? So I think the thing here is that I'm choosing the scaling so that it's at least arguably fine to think of Bw and Bu as constants. So why? The reason-- and this is probably a little vague-- is that ui is the contribution of each component, right? But w is the contribution of all the components.
So in some sense, you are saying that at the top layer, you control the contribution from all the components, and you want that to be a constant. You don't want that to grow as m goes to infinity. So basically, maybe one way to think about this is the following. So if you think about the scale, the scale here does make some sense. Because each ui is on the order of, let's say, a constant, and ui transpose x is also a constant that doesn't depend on m, right? So ui doesn't depend on m, and ui transpose x doesn't depend on m. So the ui's are on the order of constants, and then you have the sum of wi phi of ui transpose x, right? So each of these terms is on the order of a constant, and for the wi's, the total contribution is a constant. So that's why you can somewhat believe that the total thing is on the order of a constant. Because it's not that each of the wi's is on the order of a constant-- it's that the sum of their squares is on the order of a constant. So in some sense, you can believe that this whole thing is on the order of a constant. I guess it depends on how you think about this. So if you replace wi by 1 over square root m-- I guess depending on how you approximate this, roughly speaking, if you use Cauchy-Schwarz you're going to approximate this by something like the square root of the sum of wi squared, times the square root of the sum of ui transpose x squared. Uh-huh. And let me see why this is on the order of a constant. Maybe we're actually even more generous than that. So the L2 norm of w is a constant, but I think you can still make this bigger if all of them are correlated, right? OK, so I'm pretty sure there's an answer I should be able to give, but I don't see that I have a convincing answer right now. So maybe we can discuss offline for a few minutes. Yeah.
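The correlation point in this exchange can be seen numerically (my own sketch): with unit-norm rows u_i and a unit-norm top layer w, a randomly chosen w gives |f(x)| on the order of a constant regardless of m, while a w that is fully correlated with the activations phi(Ux) makes f(x) grow like sqrt(m).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
x = rng.normal(size=d)
x /= np.linalg.norm(x)

out = {}
for m in [100, 10000]:
    U = rng.normal(size=(m, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)   # each ||u_i|| = 1
    v = np.maximum(U @ x, 0.0)                      # activations phi(U x)

    w_rand = rng.normal(size=m)
    w_rand /= np.linalg.norm(w_rand)                # ||w|| = 1, random direction
    w_corr = v / np.linalg.norm(v)                  # ||w|| = 1, aligned with v

    out[m] = (abs(w_rand @ v), float(w_corr @ v))   # (uncorrelated, correlated)

print(out)
```

So whether "||w|| constant, ||u_i|| constant" is the right scaling depends on how much cancellation (or correlation) you believe happens between the top layer and the activations, which is exactly the discussion above.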
But I think the scaling is chosen to be at least reasonably correct. Of course, you can still argue certain things-- for example, depending on how w correlates with the activations, there's always some kind of flexibility. But I think the scaling is relatively OK. Anyway, it is a very good question, because you can have misleading results if you are not very careful about the scale. OK. So let's see. OK. So maybe-- I have 15 minutes. I think I can prove the theorem in 15 minutes. So what we do is that we use the definition of the Rademacher complexity and gradually peel off the sup, like we did before, right? We have a sup, and we have to somehow get rid of it. [INAUDIBLE] But it differs by a square root of m, so that's why I got-- I was thinking to use that argument, but I think it's not going to be right. Because for the sum of wi phi of ui transpose x, I think the most pessimistic bound would be something like: this is less than, if you replace each wi by 1 over square root of m and each of these terms by a constant, the sum of those, and this will be square root of m. So then in some sense, this is not helping me justify this scaling. Right. But if you believe that there's some cancellation-- so suppose you believe that there's cancellation here-- then this would be something on the order of 1. So basically, if you believe there is cancellation, then it's a reasonable scaling. Or in other words, suppose you want to make the scaling even smaller, right? Suppose you want to say that Bw is even smaller than the scaling I give, or Bu is even smaller-- then you have to assume there's a strong correlation in your model. Otherwise, your model wouldn't even [INAUDIBLE] with something on the order of 1. So whether you are willing to do that-- for example, suppose I tell you that this scaling actually explains things, right?
So then I have to convince you that I can choose Bw to be on the order of maybe 1, and then ui to be on the order of 1 over square root of m. And then the bound, indeed, would not grow as m goes to infinity, but you will find that it's very difficult for the sum of wi phi of ui transpose x to be big. You have to match up everything to make it big enough to fit that label. So would you be willing to do that? I think you can arguably say that's not really realistic. OK, cool. So I guess let's prove this. So for the proof, as I said, we're going to try to remove the sup in our definition of the Rademacher complexity step by step. So first of all, let's define v to be the post-activation intermediate layer. So let's define v to be phi of u times x, which is an m-dimensional vector, and you can correspondingly define vi to be the corresponding activation for the ith example. This is in m dimensions. And then using this notation, the empirical Rademacher complexity is an expectation, where the randomness is from sigma, and you take a sup. You have the sum of sigma i times-- here I'll write f theta of xi. But I'm going to rewrite f theta of xi as w transpose vi, just because that's the notation. So let me just replace this here: w transpose vi. This is f theta of xi. And then here we take the sup over two things, over both w and u. And the dependency on u is hiding in vi. And let's clean this up to put the 1 over n in front, and you have sup over u and sup over w of w transpose times 1 over n times the sum of sigma i vi. I guess this probably looks familiar to you, because we did something like this in the linear case as well. And then you can get rid of the w, but you still have the u. So you sup over u, and you get rid of the w. w has an L2 norm bound, so it's less than Bw. So the sup over w of this is equal to Bw times the L2 norm of the sum of sigma i vi. So now we've gotten rid of the w. We can put Bw in front.
And now let's deal with the u-- and I think I shouldn't have 1 over n here anymore. My bad. So now this is a sum over i from 1 to n. And as we write this, let's plug back in the definition of vi, which is phi of u xi. And what I'm going to do-- if you are familiar with this, you can see that this is a very loose way to do it-- is replace the 2-norm by the infinity norm. So I'm going to say that this is less than square root m times the infinity norm of this. This is just because a vector's 2-norm is less than square root m times its infinity norm, if the vector is m-dimensional. OK? And the reason why I want to replace it by the infinity norm can be seen later-- actually, can be seen now-- because somehow with the infinity norm I can simplify the sup. So now I have a sup, and note what this vector is: this vector is the sum of a bunch of vectors, right? The infinity norm is a max over the dimensions, the coordinates, of this vector. So basically, each of these dimensions is really something like the sum over i of sigma i phi of uj transpose xi. This is the jth dimension of this vector. So basically, I can take the sup over j, and the sup over u, of the absolute value of the sum of sigma i phi of uj transpose xi. And I can actually also write uj here, because if I take the sup over j, the jth coordinate actually only depends on uj. And that's actually kind of the main reason why we want to use this infinity norm, because once you write this, you find that all the j's are equivalent, right? Anyway you are taking the sup, so it doesn't matter whether it's uj, u1, u2-- the sup is the same. So this is equal to-- [INAUDIBLE] Is this an equality? Oh sorry, this is an inequality. So sup over u, a single vector u. So you replace uj by u, and you say that this needs to be less than Bu, because each uj used to have a bound Bu. Let's just skip it for simplicity.
And then you can write this as the sum of sigma i phi of u transpose xi. In some sense, you remove the m dependency, because for the infinity norm, the number of dimensions m doesn't matter. And now there is one step where I'm going to remove the absolute value. Because if you don't have the absolute value, it's kind of like-- let's first remove it. By removing it, we'll pay a factor of 2, and this requires something that is not exactly trivial, but I will not prove it in the interest of time. So you can remove this absolute value. The reason-- it's in the lecture notes. Fundamentally, it's pretty simple. It's basically because the sup is actually mostly positive, like almost always positive, because you can choose u to make this quantity positive. With or without the absolute value, it doesn't really matter, at least for this case, because you are taking the sup, right? So anyway, it's going to be positive, because you can choose u to make this quantity positive. I will ask you to refer to the lecture notes for the formal proof. And then now, after removing the absolute value, you can see that this is something like a Rademacher complexity of something simple. Because you can view this as your function now, and this is the Rademacher complexity of this kind of function. But still, you have phi and u, right? So that's why we are going to use the Lipschitz composition. So this will be less than 2 times-- you copy all the constants. So this is by the Lipschitz composition, or the Talagrand lemma. So in some sense, you can define something like H prime to be the family of u transpose x, and then you can look at phi composed with H prime. So the Rademacher complexity of phi composed with H prime is this quantity, right? And this is less than the Lipschitzness of phi, which is 1 for ReLU, times the Rademacher complexity of H prime, which is this quantity.
So that's how we do it. And now it becomes linear. So u transpose xi is a linear function class, and I think we have done this before. So for the L2-norm-constrained linear class, you can get something like: this is less than 2 square root m Bw Bu over n, times the square root of the sum of the xi 2-norms squared. This is just by what we had for the linear model. For the last inequality, you didn't put [INAUDIBLE]. Oh, sure. Yeah, sorry. My bad. Where does the 2 come from? So the 2 comes from here, this line. And this is something I didn't explain, right-- when you remove the absolute value. So how do you get exactly the same thing without losing a 2-- is that the question? I suspect it's possible, but I'm not 100% sure. So the proof in the lecture notes does lose a 2, but it sounds like it's possible you can save that factor of 2. Because the intuition I had doesn't really tell you why you should lose anything, right? So my intuition is that this quantity is just always positive, so the absolute value doesn't matter. That intuition doesn't tell you why you should lose the 2. At least in the proof I figured out, or read from the book, I lose the 2, so maybe it's because I didn't do exactly the right thing. OK, and then the very last step: you can take the expectation of the empirical Rademacher complexity, and this is the expectation over S. Then you just get what we did before. So the expectation of this is less than C times square root of n, so you get that this is bounded by 2 square root m Bw Bu times C over square root of n. That's because you use Cauchy-Schwarz for this part. This is exactly the same as what we have done for the linear models. OK, so I guess this is a natural stopping point, and next time we're going to have a bound that somewhat improves on this, so that you don't have the explicit dependency on m. Any questions?
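The proof's last step leaned on the Rademacher complexity of the L2-constrained linear class, and for that class the inner sup is available in closed form (sup over ||u|| <= Bu of the sum of sigma_i u transpose xi equals Bu times the norm of the sum of sigma_i xi, by Cauchy-Schwarz), so the final expectation step can be sanity-checked by simulation (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, Bu = 200, 5, 1.0
X = rng.normal(size=(n, d))

# Monte Carlo estimate of the empirical Rademacher complexity of
# {x -> u.x : ||u||_2 <= Bu}; the inner sup over u is exact by Cauchy-Schwarz.
vals = []
for _ in range(2000):
    sigma = rng.choice([-1.0, 1.0], size=n)
    vals.append(Bu * np.linalg.norm(sigma @ X) / n)
emp_rc = float(np.mean(vals))

# the bound used in lecture: Bu * sqrt(sum_i ||x_i||^2) / n,
# which follows from Jensen / Cauchy-Schwarz applied to E||sum sigma_i x_i||
bound = Bu * np.sqrt(np.sum(X ** 2)) / n
print(emp_rc, bound)
```

The estimate sits a few percent below the bound, which is exactly the slack introduced by the E||z|| <= sqrt(E||z||^2) step.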
OK, I guess I'll see you on Wednesday. Sounds good.
Stanford_CS229M_Machine_Learning_Theory_Fall_2021 | Stanford_CS229M_Lecture_11_Alllayer_margin.txt

So last time we talked about generalization bounds. And today we are going to talk about some better generalization bounds for deep networks. So recall that last time, what we did was show something like: the Rademacher complexity is bounded by something like this, times some polynomial of the norms and the width, right? And we said that this comes from a kind of worst-case bound for the Lipschitz-ness of the model-- actually, this is the worst-case Lipschitz-ness with respect to the input, worst case over the entire input space. And this is because when we do the covering number, we have to use this Lipschitz composition lemma, and there you have to use the Lipschitz-ness over the entire set. Sorry, this is a little bit distracting with the light, just because I'm sharing the screen using my laptop so that I can charge my iPad. OK. So we have discussed a few motivations to improve upon this theorem. I guess we discussed four of them, and I'll just briefly mention them. One of them is that this bound is exponential in depth, which is bad, because typically you have a lot of layers. Another thing is that this is worst-case Lipschitz-ness. And another thing is that typically you want to have something like: SGD prefers Lipschitz models-- that's good-- but it prefers models that are Lipschitz on the empirical data. Because if you think about an algorithm, an algorithm can only do something with the empirical data, right? We'll show this more later in the course. But even just thinking about it on a high level, the algorithm can only prefer something about the empirical data, not about the entire space, right?
And also, we said that for a tighter bound, we are going to have something data-dependent, something that depends on the Lipschitz-ness on the empirical data. So concretely, what we're going to do today is show something like: the generalization error of a parameter theta is a function of the Lipschitz-ness of f theta on the empirical data x1 up to xn, and also the norm of theta. And this function is a polynomial, so there's no exponential dependency. There is no [INAUDIBLE] So that's the goal of this lecture. And we have to introduce some new machinery to achieve this kind of thing. And the reason is that this is a different type of bound from what we have done before. Because you can see that on the right-hand side, you have a function of the training data. So typically on the right-hand side-- so in the so-called classical uniform convergence-- I guess what uniform convergence really means is slightly debatable, because it depends on how you scope it. But at least in what we have discussed in this course, all the bounds are doing something like this. So the bounds before all looked like: with high probability, for every f in some hypothesis class capital F, the population loss is less than the empirical loss plus something like a complexity measure of capital F over square root of n-- something like this. Or, alternatively, we can also achieve this kind of thing-- I think implicitly we discussed this-- for every f, L of f is less than L hat of f plus a complexity measure of little f over square root of n. So here this is capital F, and here it is little f. So the first type is exactly what we get from Rademacher complexity, because you just apply Rademacher complexity to it, and this complexity measure is, in some sense, the Rademacher complexity. And the second type, you can also get by doing a little bit more on top of the first type.
So you can get the second type by something like-- I guess this is a remark-- considering capital F to be all the functions where the complexity of little f is less than capital C. Think of the complexity as, for example, the norm of the weights. You first define a hypothesis class where the norm of the weights is less than capital C, and then you apply type I on this hypothesis class capital F. And then you take a union bound over all C, right? So for every capital C, it defines a hypothesis class, and you can probably write it as capital F sub C. And for this capital F sub C, you can do the standard Rademacher complexity. And then you say, I'm going to enumerate over all possible capital C and then do another layer of union bound on top of it. We never did this formally, but this is just one parameter-- you can just discretize it however you want. So in some sense, this is how you get from the type I to the type II bound. But the thing is that in either of these bounds, the right-hand side does not depend on the empirical data. It's always a property either of the model or of the function class. So the question is, if you want to get something like our goal today, you have to introduce some new techniques, right? So our goal is to get something like-- I think we call this a data-dependent generalization bound. This term might be a little bit overused in certain cases, but what I mean here is that you want to have a bound saying that, with high probability, for every f, your population loss is less than some complexity measure of f and the empirical data. So the right-hand side is also a random variable that depends on the empirical data. Of course, you're asking this with high probability anyway, right? So you're asking that, with high probability over the choice of the empirical data, this inequality is true for every f.
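Spelled out, the discretize-and-union-bound remark looks roughly like this (a standard sketch; the constants and the exact discretization are not tuned to match the lecture notes):

```latex
% Define F_C = { f : comp(f) <= C } and discretize over C_j = 2^j, j = 1, 2, ...
% Apply the type-I bound to each F_{C_j} with failure probability
% delta_j = delta / 2^j. A union bound gives: with probability >= 1 - delta,
% simultaneously for all j and all f in F_{C_j},
%
%   L(f) <= \hat{L}(f) + 2 R_n(F_{C_j}) + sqrt( log(2^j / delta) / (2n) ).
%
% For any fixed f, instantiate this at the smallest j with comp(f) <= 2^j,
% so that 2^j <= 2 comp(f); the right-hand side then depends only on f:
\[
  L(f) \;\le\; \hat{L}(f)
  \;+\; 2\,R_n\!\big(\mathcal{F}_{2\,\mathrm{comp}(f)}\big)
  \;+\; \sqrt{\frac{\log\big(2\,\mathrm{comp}(f)/\delta\big)}{2n}},
\]
% which is a type-II-style bound, at the price of an extra logarithmic term
% in comp(f) coming from the union bound over the discretization.
```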
And this is still useful in the sense that you can regularize the right-hand side. You can add the RHS as a regularizer. So not only is this an explanation in some sense, but it can also be used actively as a regularizer, because the right-hand side is something you can optimize. So this is the goal that we are trying to achieve. And in some sense, I used to have a little argument about why this is actually the right thing to do. It's kind of tricky, because these days there is still no consensus on what exact kind of generalization bound you are looking for. I believe that this is one thing that is good to have, but there could be other forms of generalization bounds. In some sense, you can argue that this is the best you can achieve, in the sense that you cannot have a stronger one on the right-hand side. Because, for example, you cannot replace this empirical data by the population distribution, right? If you replace that-- suppose you allow the complexity measure to depend on the population distribution, so suppose you allow a complexity measure of f and the population distribution p. Then why not just define this to be Lp of f? Why not just define it to be the population risk, something like the expectation over x from p of the loss of f? The population risk would be a valid complexity measure. Then in some sense, you lose the gist here-- it becomes too trivial. And in some sense, that suggests that you are cheating by allowing the complexity measure to depend on p. So in some sense, the fundamental issue we are facing with the generalization bound is that you don't have access to the population distribution. You want to have an empirical measure of complexity, so that you can use it for regularization. Anyway, this argument is kind of debatable. So for now, we're just saying that this is one of the reasonable goals, right?
So why is doing this challenging? I think the first thing is that this is challenging because you cannot do the simple reduction we have done before. So the reduction between type I and type II bounds doesn't work anymore. And why? Let's give it a try. For example, let's define capital F to be all the little f such that the complexity of f on the data is less than C. Suppose you define this, right? This is your hypothesis class. And suppose we attempt to use the Rademacher complexity for this capital F. What's the issue? Why can't we do this? The reason is that if your complexity measure depends on the data, then your hypothesis class also depends on the data. Before, your hypothesis class was just a fixed hypothesis class; now, it's a hypothesis class that depends on the data. So capital F is also a random variable depending on the data-- data meaning the empirical data. And then you cannot simply use the Rademacher complexity: the theorem for why the Rademacher complexity bounds the generalization error requires the capital F to be a fixed hypothesis class, fixed before you draw the random data. So that's the challenge. OK? And how do we address this? On a high level, the way to address it is to have a refined way to think about uniform convergence-- some refined uniform convergence. This is not going to be exactly what we do eventually, because what we do eventually will be something very clean that doesn't have this kind of subtlety, but this is the rough way to think about it. So maybe let's make an assumption. Suppose the complexity measure is separable, in the sense that the complexity of f on the empirical examples is of some form like the sum over i of g of f and xi. So it's really some function of f and xi, and you take the sum of them.
So in this special case, what we are essentially doing is considering an augmented loss. So you can define something like: l tilde of f is equal to l of f times the indicator that the complexity is less than C. So in some sense, what you are doing here is changing the loss function in some way, so that it's easier for you to use the existing bounds. So the mental picture I have in mind is something like this: you have a loss function-- let's say this is the empirical loss-- and you have some region, and this is the region where you have low complexity. But this region is a random region, because the definition of low complexity depends on the data. So this is random. And that's why you cannot use uniform convergence only on this low complexity region, right? So you cannot say, I'm only going to apply my uniform convergence to this region. Even though that's your goal, you cannot apply the Rademacher complexity theory that way. So what this augmented loss fundamentally does is change the geometry outside the low complexity region. For example, you just define the new loss function to be 0 out there, and the same as before inside the low complexity region. So now we have a globally defined loss function. So, basically, the hypothesis class that you are taking the union bound over is still the same, but you change the loss function. If you do this, then you can hope to apply existing tools to l tilde of f. And l tilde of f is kind of like a filtering thing that filters for low complexity. But you don't do the filtering explicitly-- technically, you are just changing the loss function. That's the only thing we do. But the effect of it is the same as changing the hypothesis class.
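In code, the augmented loss is just a masked loss (a tiny sketch; the complexity values are placeholder arguments standing in for a data-dependent complexity measure):

```python
import numpy as np

def augmented_loss(loss_vals, complexities, C):
    """l_tilde = l * 1{complexity <= C}: the usual per-example losses,
    zeroed out wherever the (data-dependent) complexity exceeds C."""
    loss_vals = np.asarray(loss_vals, dtype=float)
    mask = np.asarray(complexities, dtype=float) <= C
    return loss_vals * mask

# a high-complexity example contributes nothing to the augmented loss:
print(augmented_loss([0.7, 0.3], complexities=[3.2, 0.4], C=1.0))  # [0.  0.3]
```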
So this is the first attempt we made, in one of our papers, to address this, and it is actually the fundamental idea in some sense: you change your loss function so that you can deal with different types of quantities, or different regions of the hypothesis space. This was one of the papers we had in, I think, 2019, and we got some results. If you exactly do this indicator thing, where you change the loss like this, you can already get something, but the results are messy. So then we thought a bit more broadly. In some sense, all this is doing is changing the loss function: you are designing a surrogate loss. And we are not actually unfamiliar with surrogate losses; we used one in the margin case, it's just that the surrogate loss there was the simplest possible one. So what I'm going to talk about today, in the main part, is the so-called all-layer margin, which is a different kind of surrogate margin. Once you have this kind of generalized margin, it in some sense defines a new loss function for you, and once you have this new loss function, you can do everything in a super clean way and apply the existing tools. So this is a sketchy, vague introduction. Are there any questions so far? What do you mean by "all-layer"? Oh, sorry. This is the name of the thing we are going to introduce: a new margin, which we call the all-layer margin. Yeah, I probably should define that formally. So we [INAUDIBLE]? So basically, the main point I'm making here is that we are going to define a surrogate loss, and the point of the surrogate loss is to change our original loss so that you can focus on an important part of the space.
And the surrogate loss will basically be boring on the high-complexity part: it doesn't do anything there, it's basically the zero-one loss in some sense. So that's the general intuition. OK, so now let's see how to do this exactly. We're going to start with a generalization of the margin. So let f be a classification model: typically you threshold f to get a 0/1 prediction, and your margin is built from f itself. The standard margin is defined to be y times f(x), where y is plus or minus 1; that's what we used before. Now I'm going to define a so-called generalized margin. We say g_f(x, y) is a generalized margin if it satisfies the following two properties. The first property is that g_f(x, y) is 0 if (x, y) is classified wrongly, and it is larger than 0 if (x, y) is classified correctly. (I have a typo on the slide with these two cases swapped; let me mark it, it's an important typo.) You can see that this is trying to imitate the standard margin: the standard margin is bigger than 0 if you classify correctly, and otherwise you zero it out. Strictly speaking the standard margin is also only meaningful for correct classifications, and you extend it to incorrect classifications by setting it to 0. And then there's another small thing: we have to define the so-called l-infinity covering number. This is a small technical extension of the l2 covering number; it's not that important in most cases, it just makes the definitions cleaner in some cases and the proofs a little easier in others. The l-infinity covering number N_infinity(epsilon, F) is the minimum size of a cover with respect to the metric rho, where rho is defined by the l-infinity norm.
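As a tiny sanity check of the two properties: the standard margin y times f(x), zeroed out on mistakes, is itself a generalized margin. The toy score function below is my own example.

```python
import numpy as np

def standard_margin(f, x, y):
    """y * f(x), extended to 0 on misclassified points, so it satisfies both
    generalized-margin properties: 0 on mistakes, > 0 on correct points."""
    m = y * f(x)
    return m if m > 0 else 0.0

f = lambda x: np.sign(x[0]) * np.linalg.norm(x)  # toy classifier score
x = np.array([2.0, 1.0])

print(standard_margin(f, x, +1.0))  # correctly classified: margin > 0
print(standard_margin(f, x, -1.0))  # misclassified: margin = 0.0
```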
So basically, you look over the entire input space of f, look at the difference between f(x) and f'(x), and take the sup: this is just the l-infinity norm of f minus f'. Given these two definitions, our lemma will be that you can have a theory analogous to the margin theory, where you use this generalized margin and the l-infinity covering number. Actually, you can even do it with l2, the standard covering number; it's just easier to state with the l-infinity one. Before doing that, let me make another remark, which is that this l-infinity covering number is larger than the standard l2 covering number, just because it is the more demanding notion: you are demanding that f and f' be close at every possible input, whereas before you were only demanding that f and f' be close on the empirical data. In other words, the metric we used before is smaller than the metric used in the l-infinity case. So with this small extension, what we're going to show is that you can have an analogous margin bound with the generalized margin. The lemma is: suppose g_f is a generalized margin, and let capital G be the family of g_f as f ranges over capital F. Recall that this is in some sense just a slightly more complex version of your model hypothesis class: if you just use y times f(x), then G would be the class of maps (x, y) to y times f(x), and this is a little more general than that. And suppose for some R, the log l-infinity covering number of G at radius epsilon is less than R squared over epsilon squared, for any epsilon larger than 0. So you have this 1 over epsilon squared decay of the log covering number; recall that this is one of the regimes that is good.
This is actually the worst regime we can tolerate when we do the Rademacher complexity theory. So suppose you have this. Then with probability larger than 1 minus delta over the randomness of the training data (delta is the failure probability, which will be hidden in the logarithmic factors), for every f in capital F that correctly predicts all training examples, the 0-1 error is less than O tilde of R over square root of n times 1 over the minimum generalized margin, plus O tilde of 1 over square root of n. (In margin theory we always consider functions that classify all the examples correctly.) Recall that before, what we had was the same bound with the standard margin: the minimum margin over the entire dataset, with R the complexity of the model hypothesis class; all the other terms are the same. Now the change is that the standard margin is replaced by the generalized margin, and R becomes the complexity of the hypothesis class of the generalized margins g_f. The complexity is measured slightly differently, using the covering number, but you can also use Rademacher complexity here; it's the same. I'm just stating it this way so that the later part is easier. This bound is actually not very tight; you can improve it in some ways, but this is the simplest version. And the proof is basically that we just reuse everything we did in margin theory; everything transfers exactly. In some sense, the proof just replaces F by G in the margin theory. I will do this step by step, but that's the short version. So technically, you still use the ramp loss. Recall that the ramp loss is the loss function that equals 1 for margins below 0, decreases linearly from 1 to 0 on the interval from 0 to gamma, and equals 0 for margins above gamma.
And recall that before, once we had this ramp loss, we defined a surrogate loss, l hat gamma of theta. Before, we just applied the ramp loss to the model output f theta; now we apply it to the generalized margin, g of f theta. We can also define the surrogate population loss, which is just the expectation of the empirical surrogate loss. And before, we used Rademacher complexity to control the difference between the true and empirical surrogate losses: l gamma of theta minus l hat gamma of theta is less than the empirical Rademacher complexity of, sorry, before it was l gamma composed with F, and now it's l gamma composed with G, because the function class is different, plus O tilde of 1 over square root of n. So now we have to bound the Rademacher complexity, and the Rademacher complexity is bounded via the covering number, so let's do the covering number. Some preparation: we assumed a bound on the l-infinity covering number. First, the standard l2(P_n) covering number of l gamma composed with G is less than the l2(P_n) covering number of G itself, at a radius scaled by gamma, by removing the l gamma. This step uses the Lipschitzness of l gamma; it's actually 1-over-gamma Lipschitz.
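The ramp loss just described can be written down directly; a minimal NumPy sketch (the vectorized form is mine):

```python
import numpy as np

def ramp_loss(margin, gamma):
    """Ramp loss: 1 for margin <= 0, linear from 1 down to 0 on [0, gamma],
    and 0 for margin >= gamma. It is 1/gamma-Lipschitz in the margin, which
    is exactly what the covering-number step of the proof uses."""
    return np.clip(1.0 - margin / gamma, 0.0, 1.0)

margins = np.array([-0.5, 0.0, 0.25, 0.5, 1.0])
print(ramp_loss(margins, gamma=0.5))  # -> 1, 1, 0.5, 0, 0
```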
So this step uses the Lipschitz property of covering numbers. Next, the l2 covering number is bounded by the l-infinity version, and for the l-infinity version we have the assumption: for every epsilon, the log covering number at radius epsilon gamma is less than R squared over epsilon squared gamma squared. The last step is the assumption. And you can see that even if you assume something like this directly about the l2 covering number, that is also fine; you don't literally have to use the l-infinity norm. Then, because the log covering number is bounded by this, and we have the translation from log covering numbers to Rademacher complexity, you get that the Rademacher complexity of l gamma composed with G is less than O tilde of R over gamma square root of n. This is by chaining, Dudley's theorem and its consequences, because we have discussed which covering-number decay implies which Rademacher complexity. And then, the same as before, take gamma to be gamma min, the minimum over i of g_f(x_i, y_i). This step is not formal; there is a caveat here, because gamma min is a random variable, so eventually you have to do a union bound over a discretization of gamma. I guess we had this issue before as well, but since gamma is only one number, you can discretize and do a union bound over it; let me not get into it. So suppose we just take gamma to be gamma min. Then l hat of gamma min is 0, so you get that the 0-1 loss of theta is less than 0 plus O tilde of R over square root of n times gamma min, plus O tilde of 1 over square root of n. OK. So this proof is not 100% formal, just because technically I'm not allowed to take gamma to be anything that depends on the data; I really have to show it for every gamma, and that requires another union bound over gamma. OK. Any questions? So maybe, let's see what we have achieved with this lemma.
What we achieve with this lemma is that now you can fold everything into the generalized margin. The generalized margin is in some sense a way to warp your model output: you can stretch the output for certain f, and squeeze it for certain other f, according to where you are. So basically everything is folded into this generalized margin, and the question now is: for which g_f can you bound the covering number of G? And you also want this g_f to be somewhat meaningful, and so on and so forth. If you just take g_f to be the standard one, y times f(x), then the covering number of G will be the same as the covering number of F, and the resulting Rademacher complexity will depend, as before, on something like the product of the layer norms. But we are trying to do better than this. OK, so how do we do it? We now define the so-called all-layer margin. This is a special instance of g_f, a concrete definition for which we can bound the covering number, or the Rademacher complexity. To define this all-layer margin, this generalized margin, we have to introduce some notation: we will be considering a perturbed model. Actually, sorry, one moment; I think it's useful to give some motivation before I define it. Our motivation is the following. Think about the linear model, where the model is w transpose x. The standard normalized margin is defined to be something like y times f(x) over the norm of w.
So your margin is y times the model output over the 2-norm of w. This is the normalized margin, which is what governs the generalization performance. And the question is, how do you normalize? If you have a deep model, one attempt is to normalize by some quantity, maybe the product of the layer-wise Lipschitz constants, or maybe something else. That's the natural attempt, and in some sense all the previous work is doing this: you normalize the margin based on the worst-case Lipschitzness. What we do is different: we don't want to normalize by a constant that depends only on the function class. So we take a different approach, and we reinterpret the standard margin. Our interpretation is that you can view the margin as the minimum norm of delta such that y times w transpose (x plus delta) is less than or equal to 0. You are trying to find the minimum perturbation of your data point such that, after perturbing it, you cross the decision boundary. Intuitively, this is also right, because the margin is the distance to the boundary, which is the same as how much you can perturb the input before you cross the boundary. (If you do the exact math, maybe something doesn't match exactly, but this is the rough intuition.) This is the perspective we take to generalize the margin to deep models. For deep models, we still take this perturbation-based perspective, but it turns out we have to perturb all the layers, not only the input. The first attempt we tried was to perturb just the input.
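For the linear case, this perturbation interpretation can be verified in closed form: the minimizing delta points along negative w, and its norm equals |w.x| / ||w||, the distance to the boundary. A small check (the concrete numbers are mine):

```python
import numpy as np

w = np.array([3.0, 4.0])   # ||w|| = 5
x = np.array([1.0, 0.5])   # w . x = 5, so distance to the boundary is 1
y = 1.0                    # correctly classified, since y * w.x > 0

# Smallest perturbation with y * w.(x + delta) <= 0: move straight toward
# the hyperplane, delta = -(w.x / ||w||^2) * w.
delta = -(w @ x) / (np.linalg.norm(w) ** 2) * w

print(np.linalg.norm(delta))   # ~1.0, equals |w.x| / ||w||
print(y * (w @ (x + delta)))   # ~0.0, the point lands on the boundary
```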
That is, you see what is the smallest perturbation of the input such that you can change the decision of your model. But that just technically doesn't work; it doesn't seem to capture the fundamental complexity. So we have to consider a perturbed model that perturbs all the layers. What we do is take a perturbation delta, which is a sequence of perturbations delta 1 up to delta r, where each delta i is a vector. And the way you perturb is the following; you also have to work out the normalization in the right way. You first perturb the first layer: the first layer used to be W1 times x in a deep net, and you perturb it by adding delta 1, a vector, times the 2-norm of x. Then you perturb the second layer: you first apply W2 (and the nonlinearity) to the perturbed version of the first layer, and then you perturb it further with delta 2. And what's the scaling in front of delta 2? Delta 2 is a vector, and the scaling is the norm of the perturbed first layer. How exactly to design this perturbation is a little bit tricky; we tried various versions in our research, and it turns out this one makes everything fit nicely. You do this for all the layers, and eventually the r-th perturbed layer h_r equals sigma of W_r applied to the previous perturbed layer h_{r-1}, plus the vector delta r scaled by the norm of h_{r-1}. After you set up this perturbation, you can ask: what's the smallest perturbation that changes my decision? That's the definition of the all-layer margin, which we call m(f, x, y). It's defined to be the minimum size of such a perturbation, where you measure the size by the square root of the sum of the squared 2-norms of the per-layer perturbations.
And your constraint is that after the perturbation (call the perturbed model output f(x, delta)), f(x, delta) times y becomes non-positive, so an incorrect prediction. You can also do this for multiclass labels, but it's essentially the same, so I'm doing binary labels. So this is the definition of the all-layer margin. You can see that the definition becomes much more complicated, but then the proofs will be easy. And you can also interpret it intuitively: m(f, x, y) is big if it's hard to perturb, that is, hard to change the decision of the network. And how could it be hard? I think there are two ways. One is that the model f is smooth, meaning it has a small Lipschitz constant, so that you have to perturb a lot to make a big change in the model output. Another possibility is that y times f(x) is large, that is, the standard margin is large. If the standard margin is large, you also have to change a lot: before, you were outputting something very positive, f(x) is very big, and now you have to move it to the other side of the boundary, so you have to perturb a lot. (I typically talk about y equal to 1, so a large positive output means you are very confident about your prediction, and if you are very confident, it takes a large perturbation to make the model change its mind.) And here, Lipschitz technically means Lipschitz in the intermediate variables, the intermediate layers, because you are measuring robustness to perturbations applied at the intermediate layers.
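The perturbed forward pass defining the all-layer margin can be sketched directly; the ReLU choice of sigma and the toy layer sizes below are my own assumptions, and the check just confirms that delta equal to 0 recovers the unperturbed network.

```python
import numpy as np

def plain_forward(Ws, x):
    """Unperturbed net: linear first layer, then sigma(W h) with sigma = ReLU."""
    h = Ws[0] @ x
    for W in Ws[1:]:
        h = np.maximum(W @ h, 0.0)
    return h

def perturbed_forward(Ws, x, deltas):
    """All-layer perturbed pass from the lecture:
    h_1 = W_1 x + delta_1 * ||x||, and for i >= 2,
    h_i = sigma(W_i h_{i-1}) + delta_i * ||h_{i-1}||."""
    h = Ws[0] @ x + deltas[0] * np.linalg.norm(x)
    for W, d in zip(Ws[1:], deltas[1:]):
        prev_norm = np.linalg.norm(h)
        h = np.maximum(W @ h, 0.0) + d * prev_norm
    return h

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((4, 3)), rng.standard_normal((1, 4))]
x = rng.standard_normal(3)
zero = [np.zeros(W.shape[0]) for W in Ws]

print(np.allclose(perturbed_forward(Ws, x, zero), plain_forward(Ws, x)))  # True
```

The all-layer margin is then the smallest sqrt(sum_i ||delta_i||^2) for which the perturbed output crosses the decision boundary.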
But Lipschitzness in the intermediate layers turns out to be close to Lipschitzness with respect to the parameters; I'll discuss that in a moment. So once you have all of this, you get the following theorem. With high probability, the 0-1 error of f is less than O tilde of the following: 1 over square root of n, times the sum over layers of the so-called (1,1)-norms of the W_i's (the (1,1)-norm is just the sum of the absolute values of the entries of W), divided by the minimum over i of m(f, x_i, y_i), plus a lower-order O tilde of 1 over square root of n term. In some sense we are in the mindset that anything polynomial in the norms doesn't really matter that much, so you can just consider that factor to be some polynomial in the norms. Of course, you can also ask whether the (1,1)-norm is the right choice of norm; in some sense it's not the best norm we can hope for, so there is still some room for improvement here. But suppose you ignore anything polynomial in the norms. What's important here is the all-layer margin. Basically, this is saying that if the all-layer margin is always big, generalization is good; if the all-layer margin is small, then your generalization may be bad. And what's the all-layer margin? The all-layer margin is about robustness to perturbations of the intermediate layers. So this is saying that if you are robust to perturbations in the intermediate layers, that implies you have good generalization. You can also compare this with the bound we got before; you can pretty much argue that this is strictly better. Is this the right place for us to discuss this? I guess let me discuss the comparison with the previous bounds later, when I make remarks about this theorem.
But you can show this is better than the previous bound, mostly because of the all-layer margin m(f, x, y): roughly speaking, in the worst case you can think of it as at least something like f(x) divided by the Lipschitz constant. The Lipschitz constant controls how fast the output can move, and you have to move the output all the way from f(x), something positive or negative, to 0, so you need a movement of at least that size. (Wait, my bad; I wrote this relation the wrong way around on the board, it should be as I just said.) And that's why this is better than the previous bound: the previous bound didn't consider the different Lipschitzness at different data points, whereas here you are really saying that if your model is smooth at the data points you have actually seen, then you can generalize well. But let me discuss this more thoroughly later; I just want to show a little of it so that you don't feel this is a useless bound. Maybe bear with me and just assume this is useful, and then we can discuss all the interpretations. Any questions so far? OK, I only have about 30 minutes, so let's just dive into the proof. The proof requires a few small steps. First of all, it suffices to bound the l-infinity covering number N_infinity(epsilon, G) by O of R squared over epsilon squared. Though, sorry, I think I have some typos here about exactly where the square goes; I'll double-check later and send the note-taker a clarification. Because it's always a polynomial, I didn't pay too much attention to it.
I don't know exactly whether that square is applied inside or outside, but either way, you have to show some bound like this. So let's assume this is the correct bound; then you basically have to show something like it, because if you have this, you can use the lemma from before, applied to the generalized margin, and you get the generalization bound. So essentially, we just have to bound the covering number of G. And it turns out that for the covering number of G you have this very nice decomposition lemma. So let F_i be the hypothesis class for each layer, and we also constrain that the (1,1)-norm of W_i is less than beta_i, OK? Then your f is really f_r composed with f_{r-1} down to f_1; this is the notation we have used. And recall that we had a decomposition lemma before, which was kind of complicated: you had all of these dependencies and you had to track how the error propagates. But now the lemma is pretty simple. Let m composed with F denote the family of all-layer margins. Then the log l-infinity covering number of m composed with F, at a radius that is simply the quadratic combination of the per-layer radii, the square root of the sum of the epsilon_i squared, is less than the sum over layers of the log l-infinity covering numbers N_infinity(epsilon_i, F_i). So in some sense, this is saying that you only have to deal with the covering number for every layer separately, and then you get a covering number for the composed function class. But you don't get the covering number of the composed function class exactly; you get the covering number of the all-layer margin of the composed function class. And here, N_infinity(epsilon_i, F_i) is defined with respect to an input domain, because there is an input domain in the definition of this covering number, and the domain is the ball of 2-norm less than 1.
And I guess the most important thing here is that the left-hand side involves m, the all-layer margin, OK? And the corollary is that if for each layer you can bound the log covering number by something like c_i squared over epsilon_i squared (let's use little c here), then, taking epsilon_i to be epsilon times c_i over the square root of the sum of the c_j squared, you get that the log covering number of the composed model at radius epsilon is less than, up to a factor polynomial in r (which we're ignoring), the sum of the c_i squared over epsilon squared. Which means that if you believe c_i is the complexity measure for each layer, then the complexity of the composed model, the all-layer margin of the composed model, will be just the sum of the c_i squared. (I thought I had an error here, but I think this is indeed correct.) And c_i will be something like the (1,1)-norm of W_i. I won't prove the per-layer bound, because it is a statement about a single layer: you can basically invoke the covering-number theorem for linear models to get it, so indeed it's true. For linear models you get something like c_i equal to beta_i, where beta_i was the bound on the (1,1)-norm of W_i. And this will imply the main theorem. OK. So I hope I've convinced you that as long as you prove this decomposition lemma, you are done: for the right-hand side you invoke the result for linear models, then you plug into this lemma and get the covering-number bound for the all-layer margin, and then you get the original theorem using the lemma I showed before for the generalized margin. OK? Any questions so far? OK. So now let's prove the lemma, the decomposition lemma.
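As a quick numeric sanity check of the radius allocation in the corollary (the particular c_i values are mine): with epsilon_i = epsilon * c_i / sqrt(sum c_j^2), the combined radius sqrt(sum epsilon_i^2) is exactly epsilon, and every per-layer term c_i^2 / epsilon_i^2 collapses to (sum c_j^2) / epsilon^2.

```python
import numpy as np

c = np.array([2.0, 3.0, 6.0])   # per-layer complexities c_i; sum of squares = 49
eps = 0.1                       # target radius for the composed class

eps_i = eps * c / np.sqrt(np.sum(c ** 2))   # per-layer radius allocation

combined = np.sqrt(np.sum(eps_i ** 2))      # radius appearing in the lemma
per_layer = c ** 2 / eps_i ** 2             # each term in the covering bound

print(combined)    # ~0.1, i.e. equals eps
print(per_layer)   # each entry ~4900.0 = sum(c^2) / eps^2
```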
So I guess I only stated the lemma for the concrete F_i, the maps z to sigma(W_i z). You can also state the lemma in a more general form, and prove it in a more general form, but I'm only going to prove it for this particular family of F_i. So there are two steps. Step one: we show that m(f, x, y) is 1-Lipschitz in f, and what 1-Lipschitz in f means is the following. For every f and f', m(f, x, y) minus m(f', x, y) is less than the quadratic combination of the per-layer worst-case differences: the square root of the sum, from i equal 1 to r, of the squared max over x in the unit 2-norm ball of the 2-norm of f_i(x) minus f_i'(x). Here, f equals f_r composed with f_{r-1} down to f_1, and f' equals f_r' composed with f_{r-1}' down to f_1'. So basically, the Lipschitzness of the all-layer margin has no extra scale in some sense, because you are looking at the unit ball, and it only depends on a sum of the differences between the f_i and f_i'. There's no multiplier here: you are not multiplying by the Lipschitz constants of the f's; it's really literally a sum. It's very clean. We'll prove step one in a moment. But suppose you have step one. Then, in step two, you can use step one to get the lemma relatively easily. What you do is construct a cover, and the construction is also kind of trivial. You let U_1 up to U_r be epsilon_1-, up to epsilon_r-covers of F_1 up to F_r, respectively. And recall that, if you still remember what we did last time, the covering construction was very complicated: you iteratively constructed covers. But now we just individually construct covers for every F_i.
And we set U_i such that the size of U_i equals the l-infinity covering number N_infinity(epsilon_i, F_i). By definition, this means that for every f_i in capital F_i, there exists some function u_i in capital U_i such that f_i minus u_i is small. And I guess we are using the l-infinity metric here: the max over x in the unit ball of the 2-norm of f_i(x) minus u_i(x) is at most epsilon_i. This we know by definition. And now, we're going to turn this into a cover for the composed family. The cover is just: we take U to be the family of compositions, u_r composed with u_{r-1} down to u_1, and this will be our cover; we'll show it is a cover for m composed with F. Why is that the case? Suppose we are given f equal to f_r composed down to f_1 in capital F. Let u_r down to u_1 be the nearest neighbors of f_r down to f_1, and let u equal u_r composed down to u_1. Then, using the Lipschitzness from step one, m composed with f minus m composed with u is less than the quadratic combination of the worst-case differences between the f_i and u_i over the unit-norm ball, and because the f_i and u_i are close (that's how we constructed the cover), we get the square root of the sum of the epsilon_i squared, from 1 to r. OK. So basically, once you have such a nice Lipschitzness property, you can just cover everything individually, and you don't have to think too much about the composition. The composition is trivial, because it is dealt with by the Lipschitz property. So now, why does the Lipschitzness hold? Let's prove step one. We only prove the upper bound, by symmetry: because f and f' play the same role, you only have to prove one side, and you can flip them to the other side.
And the way to prove something like this: each of the two margins is defined by an optimization program; they are the optimal values of optimization programs. Basically, you are trying to show that two optimization programs are doing similar things, and how do you do that? Typically, you turn the optimal solution of one optimization program into a feasible solution of the other; that's how you relate two optimization programs. So let delta_1 star up to delta_r star be the optimal choice of delta in defining m(f, x, y). Our goal is to turn this into delta_1 hat up to delta_r hat, a feasible solution of the program for m(f', x, y). If it's a feasible solution, you get that m(f', x, y) is less than the square root of the sum of the squared norms of the delta_i hats, and then you can relate this to the delta_i stars using the construction, all right? OK, so that's the rough idea. And how do we construct delta_1 hat up to delta_r hat? We want it to be feasible so that we have this inequality; this part is the feasibility part. Basically, the way we do it is that we want f' with delta_1 hat up to delta_r hat to do the same thing as f with delta_1 star up to delta_r star. You want the perturbation on f' to compute exactly the same thing as the perturbation delta star on f, because then you know this is a feasible solution. Because what's feasibility? Feasibility is about whether the perturbation flips the prediction to the other side. So if one of them flips the prediction to the other side, then the other one also flips the prediction, because they're doing the same thing. That's the principle, and carrying it out is pretty much just algebra. So f has parameters W_1 up to W_r, and f' has parameters W_1' up to W_r'. And let's consider the computation.
So I guess the computation is this: h_1 = W_1 x + delta_1* ||x||, h_2 = sigma(W_2 h_1) + delta_2* ||h_1||, and so on, up to h_r = sigma(W_r h_{r-1}) + delta_r* ||h_{r-1}||. This is the computation you did for m_f(x, y), right? And I want to imitate this computation by perturbing f' in some way. How do we imitate it? The imitation is almost trivial. For f', what happens? h_1 = W_1' x plus something. If you just added delta_1* ||x||, that wouldn't be enough, because W_1' is different from W_1, right? You have to add something extra to make this computation the same as before. What you do is additionally perturb by (W_1 - W_1') x, and then the two computations are literally identical. You declare this entire perturbation to be delta_1-hat times ||x||: basically, you compensate for the difference between W_1 and W_1' by adding this additional perturbation. That means delta_1-hat = delta_1* + (W_1 - W_1') x / ||x||. And you do the same thing for every layer. For h_2, you want it to equal the same h_2 as above, but in this step you're perturbing based on W_2', not W_2. So you first apply the original perturbation delta_2* ||h_1||, and then you compensate for the difference by perturbing even more, and you declare this entire thing to be delta_2-hat ||h_1||. That means delta_2-hat = delta_2* + (sigma(W_2 h_1) - sigma(W_2' h_1)) / ||h_1||. And you do the same thing for every layer.
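This compensation trick can be checked numerically. A minimal sketch, assuming the perturbed forward rule h_i = sigma(W_i h_{i-1}) + delta_i ||h_{i-1}|| with a ReLU sigma at every layer (the lecture's first layer is linear; it is treated uniformly here for brevity, and all names are illustrative):

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def forward_perturbed(Ws, deltas, x):
    """Perturbed forward pass: h_i = relu(W_i h_{i-1}) + delta_i * ||h_{i-1}||."""
    h = x
    for W, d in zip(Ws, deltas):
        h = relu(W @ h) + d * np.linalg.norm(h)
    return h

def matched_perturbations(Ws, Ws_prime, deltas, x):
    """delta_i_hat = delta_i* + (relu(W_i h) - relu(W_i' h)) / ||h||, chosen so
    that f' with the hatted deltas computes exactly what perturbed f computes."""
    h = x
    hats = []
    for W, Wp, d in zip(Ws, Ws_prime, deltas):
        nrm = np.linalg.norm(h)
        hats.append(d + (relu(W @ h) - relu(Wp @ h)) / nrm)
        h = relu(W @ h) + d * nrm  # follow perturbed f's own computation
    return hats
```

Running both forward passes on random weights confirms the two perturbed networks produce identical hidden states, which is exactly the feasibility argument.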
And in general, you just take delta_i-hat = delta_i* + (sigma(W_i h_{i-1}) - sigma(W_i' h_{i-1})) / ||h_{i-1}||. And now we've reached our goal: delta_1-hat, ..., delta_r-hat on f' do the same thing as delta_1*, ..., delta_r* on f. I'm using this shorthand just to save some writing: perturbing f' by delta_1-hat up to delta_r-hat gives exactly the same functionality, the same prediction, as perturbing f by the starred deltas. That means this is a feasible solution for m_{f'}, and that's why m_{f'}(x, y) is at most the square root of the sum of ||delta_i-hat||^2. And now I'm going to bound this by the square root of the sum of ||delta_i*||^2 plus the square root of the sum of the squared norms of the differences. This uses the so-called Minkowski inequality — I always think of it as Cauchy-Schwarz, but the technical name is Minkowski. What it says is the following: sqrt(sum_i ||a_i + b_i||^2) <= sqrt(sum_i ||a_i||^2) + sqrt(sum_i ||b_i||^2). That's the Minkowski inequality, and you can actually prove it from Cauchy-Schwarz by squaring both sides and canceling a bunch of terms. We apply it with a_i = delta_i* and b_i the difference term. So now, the first term is m_f(x, y). And the other term you can bound by the square root of the sum over i from 1 to r of the max over ||x|| <= 1 of ||sigma(W_i x) - sigma(W_i' x)||^2, just because the whole thing is homogeneous.
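A quick numeric sanity check of the Minkowski inequality as stated — sqrt(sum ||a_i + b_i||^2) <= sqrt(sum ||a_i||^2) + sqrt(sum ||b_i||^2) — which is just the triangle inequality for the stacked vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(5, 3))  # rows are the vectors a_i
b = rng.normal(size=(5, 3))  # rows are the vectors b_i

# Left- and right-hand sides of the Minkowski inequality.
lhs = np.sqrt(np.sum(np.linalg.norm(a + b, axis=1) ** 2))
rhs = (np.sqrt(np.sum(np.linalg.norm(a, axis=1) ** 2))
       + np.sqrt(np.sum(np.linalg.norm(b, axis=1) ** 2)))
```

For any random draw, lhs never exceeds rhs, matching the inequality applied with a_i = delta_i* and b_i the per-layer difference term.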
So dividing by the 2-norm is the same as restricting the 2-norm to be 1, right? So this equals m_f(x, y) plus the square root of the sum over i of the max over ||x|| <= 1 of ||f_i(x) - f_i'(x)||^2. And that is what we wanted for step one. Any questions? [INAUDIBLE] W and W prime? Yeah. So W is the parameter of f, and W prime is the parameter of f prime, and — at least in this context — they don't have any relationship, because I'm just trying to show step one: I take two arbitrary f and f prime, and I want to say that the difference in all-layer margin is bounded by the difference in each of the layers. So it doesn't matter what they are. [INAUDIBLE] Yes. f prime involves all the W_i primes, and f involves all the W_i's. Cool. Any other questions? [INAUDIBLE] Can you say it again? [INAUDIBLE] Yeah — if I'm guessing the question right: all of this depends on the definition of m_f, yes, of course. Actually, when we did this research, we were trying to meet in the middle: you have to shape the definition so that the analysis works out. But in some sense, because the proof is simple and clean, I feel somewhat good about the definition. So I'll use the next few minutes to talk about some comparisons, interpretations, and possible next extensions. For the interpretation — I've discussed this a little already — the most important thing is the all-layer margin part, at least from our side, and we don't even care about the norm term. So you can compare with Bartlett et al. '17, the paper we discussed last time.
So you can formally do this — you can formally say that if you look at the difference between the perturbed model and the original model, by some kind of telescoping argument the difference is something like: for every layer you perturb something, you pay something for it, and you also pay a blow-up factor from the other layers. You can prove this; it's not supposed to be super hard. So, ignoring some minor details that allow a cleaner exposition — for example, ignoring the dependency on r — and supposing for simplicity that y = 1: if you want f(x) > 0 but the perturbed prediction to be < 0, that's the situation where you perturb the model to predict the wrong thing. This means your delta basically needs to be larger than something like f(x) divided by the product of the spectral norms, because if your delta is too small, the change it can produce on the right-hand side is too small to make a big enough difference. So this is saying that m_f(x, y) / (y * f(x)) — the new margin versus the old margin — is at least something like one over the product of the spectral norms. I'm writing this somewhat informally, ignoring constants and some small details; this product probably shouldn't range over all of 1 to r — it should be missing some terms in the middle — but those aren't super important. So the inverse all-layer margin is at most something like the product of the spectral norms divided by f(x). And this is indeed a better bound than before, because our new bound depends on the left-hand side, and the old bound depends on the right-hand side. So this is a better bound.
At least in this respect. But how much better is it, compared to the previous one? That's a question mark: is it true that your all-layer margin bound becomes polynomial instead of exponential? There are some indications that this is a much better bound, empirically and conceptually. Empirically, we did verify it seems much better — the numbers become smaller — just because the empirical Lipschitzness is better than the worst-case bound. Another reason you can hope the empirical bound is better is something we'll show later — I think I've said this once before, but let me write it down again: SGD prefers solutions that are Lipschitz on the data points. In some sense your algorithm is implicitly minimizing the Lipschitzness on the data points, and that's why the Lipschitzness on the data points is probably better than the worst-case Lipschitzness over the entire domain — and probably why the gap between these two bounds is significant. So in some sense, SGD is implicitly maximizing the all-layer margin. Of course, this is only approximate: what SGD prefers will be similar in form, but it won't exactly match, so we don't yet have a fully coherent theory. But conceptually it all roughly matches. Another related thing, which people actually use in practice, is SAM — sharpness-aware minimization. It's something that gets you better performance empirically on many datasets. And what they are doing is also perturbation: we perturb the hidden variables, while they perturb the parameter theta.
So they are trying to make the model more Lipschitz in the parameter theta, instead of more Lipschitz in the hidden variables, the intermediate variables h_i. But actually, these two are very related. Here is a fact: the gradient of the loss with respect to the parameter W_i — always on a single example — equals the gradient of the loss with respect to the hidden variable in the layer above, times the hidden variable transposed: grad_{W_i} loss = (grad_{h_i} loss) h_{i-1}^T. This is just by the chain rule — compute the gradient with respect to W_i and you get this. (There's actually a term for this in neuroscience, but I'm blanking on the name.) So if you look at the norm of the gradient with respect to the parameter, it's quite related to the norm of the gradient with respect to the hidden variable — this one is a vector, this one is a vector, and this one is a matrix, their outer product. So Lipschitzness in the parameter is similar to — somewhat related to — Lipschitzness in the hidden variable. The last thing — I'm running out of time, sorry — is a more general version, where you don't have to use the minimum margin over the entire data set. You can prove something like: test error is at most 1 over sqrt(n) times — instead of the worst case, the minimum margin over the data set — the average inverse all-layer margin, of this form, squared, times the sum of the complexities of each layer, plus lower-order terms. But this is literally the last thing I want to say. Are there any questions?
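The stated fact — the gradient with respect to a weight matrix is the gradient with respect to that layer's output times the input transposed — can be checked by finite differences on a toy layer (the squared loss here is an illustrative choice, not the lecture's loss):

```python
import numpy as np

# Toy layer z = W h, with loss L(W) = 0.5 * ||W h - y||^2.
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4))
h = rng.normal(size=4)   # the "hidden variable" feeding the layer
y = rng.normal(size=3)

def loss(Wm):
    return 0.5 * np.sum((Wm @ h - y) ** 2)

grad_z = W @ h - y             # gradient w.r.t. the layer output z
grad_W = np.outer(grad_z, h)   # chain rule: (dL/dz) h^T, an outer product

# Finite-difference gradient for comparison.
eps = 1e-6
num = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        E = np.zeros_like(W)
        E[i, j] = eps
        num[i, j] = (loss(W + E) - loss(W - E)) / (2 * eps)
```

The outer-product form also makes the norm relation visible: ||grad_W||_F = ||grad_z|| * ||h||, which is why Lipschitzness in the parameter and in the hidden variable are linked.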
So basically, the last thing I want to say is that instead of having the minimum all-layer margin there, you can have the average all-layer margin. Cool. OK. Any questions? OK. I guess then see you next — Wednesday, in two days. Yeah, OK. See you. Bye. Thanks.
Stanford_CS229M_Machine_Learning_Theory_Fall_2021 | Stanford_CS229M_Lecture_6_Margin_theory_and_Rademacher_complexity_for_linear_models.txt | Well, hello, everyone. So in this lecture, what we're going to do is bound the Rademacher complexity by a concrete formula for concrete models — and by concrete models I really just mean linear models for this lecture. A few lectures later, we'll talk about neural networks. Just as a review, to connect to past lectures: we have proved that the excess risk, the generalization error, is upper bounded by the Rademacher complexity. That's what we've done in the last few lectures. In the next few lectures, we'll talk about how to upper bound the Rademacher complexity for concrete models, like linear models and neural networks. We'll also deal with the classification loss: there's something special to do there, because it's a binary loss — it's not continuous — so we have to handle it with a special technique. That's the overview for this lecture. So let's first set up the basics: binary classification. As you'd probably expect, you have y in {-1, 1} and some classifier h, which maps the input space X to the real numbers R. We think of h as the function that maps the input to a real number — the logit, for example — and when you make the prediction, you take the sign of the output of the classifier, sign(h(x)). If h outputs a positive number, you output 1; otherwise, you output -1. And H is the family of h's — that's our notation. The loss function on an example (x, y) equals the indicator that the true label y is not equal to sign(h(x)). That's our setup.
So I guess the first thing is that I'm going to very briefly mention the finite hypothesis class case — just a very quick note. We've already done finite hypothesis classes, right? So it's probably useful to know that you can recover the same bound for a finite hypothesis class using this machinery of Rademacher complexity. That's a reasonable sanity check if you think Rademacher complexity is a powerful tool. There is indeed such a theorem, which I'm not going to prove today, because the way to prove it is related to something more advanced later; I'm just going to state it. It says: if F satisfies that for every f in F, the sum of the f(z_i)^2 is at most M^2 n. This condition is a little unintuitive, but it's implied by — it's a weaker version of — just assuming |f(z)| is bounded by M: if |f(z)| is bounded by M, then of course the sum of the squares is bounded by M^2 n. I'm stating the weakest version of the condition for generality. And given this, the empirical Rademacher complexity on the sample S = (z_1, ..., z_n) is bounded in terms of the size of the hypothesis class: by something like the square root of (log|F| times M^2, divided by n) — essentially sqrt(log|F| / n) up to the range M. And if you apply this to the finite hypothesis classes we've talked about — for example, to a binary loss function — you get almost exactly the same bounds we had before. We're not going to prove this now.
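One way to see the stated theorem in action is to Monte Carlo-estimate the empirical Rademacher complexity of a small finite class and compare it against M * sqrt(2 log|F| / n). The constant 2 is the usual one in Massart-style statements of this bound; treat the exact constant as an assumption here, since the lecture leaves it implicit.

```python
import numpy as np

def empirical_rademacher(F_vals, n_trials=2000, seed=0):
    """Monte Carlo estimate of empirical Rademacher complexity of a finite
    class.  F_vals has shape (num_functions, n): row k is (f_k(z_1), ..., f_k(z_n))."""
    rng = np.random.default_rng(seed)
    _, n = F_vals.shape
    total = 0.0
    for _ in range(n_trials):
        sigma = rng.choice([-1.0, 1.0], size=n)   # Rademacher signs
        total += np.max(F_vals @ sigma) / n       # sup over the finite class
    return total / n_trials

rng = np.random.default_rng(1)
F_vals = rng.uniform(-1, 1, size=(8, 200))        # |f| <= M = 1, n = 200
est = empirical_rademacher(F_vals)
bound = np.sqrt(2 * np.log(8) / 200)              # M * sqrt(2 log|F| / n)
```

The estimate comes out well below the bound, as it should: the bound only uses the range M and the class size, not the actual function values.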
But we are going to prove it in a future lecture — not today, because the techniques are related to what we'll use later. This is just saying we can recover what we had; Rademacher complexity is more interesting when you apply it to continuous function classes. We've also talked about the limitation of finite hypothesis classes: even if you handle continuous models via discretization, you're going to pick up the parameter count p in your bound. So with discretization, what you'll likely get is something like sqrt(p/n), where p is the dimensionality — the number of parameters in the model. That wouldn't be super impressive, given that we've already done those brute-force discretizations before. So today we're going to upper bound the Rademacher complexity a different way, without those tools; the way we do it is more algebraic and analytic, as we'll see. But before that, we have to deal with the loss function. Look at the loss, L_{0-1}(x, y; h). The tricky thing is that there's a sign in there, right? Recall from a previous lecture: if h(x) outputs something binary — plus 1 or minus 1 — then you can show that the Rademacher complexity of F, the family of losses, is basically on the same order as the Rademacher complexity of the hypothesis class H. That's what we did last time.
But now we have a slightly different definition of h: h is the function that outputs a real number — the one before the sign function. So that kind of reduction doesn't work anymore. Of course, you can still apply the same argument to sign(h), but then you get the Rademacher complexity of sign(h), which — you haven't solved the problem, you've just restated it. So we're going to deal with this issue first. This is what people sometimes call margin theory: we'll introduce some tools to deal with the sign issue — in some sense, you have to convert the real number to a binary number in some effective way — and then we'll bound the Rademacher complexity of linear models using analytic tools. That's the plan. OK. The intuition is that scale, in some sense, matters when you do classification, implicitly — even though at the end of the day the 0-1 loss doesn't depend on the scale. The motivating example is the following. Suppose you have a classification problem — I'm using red for the positive data and circles for the negative data — and think about two different classifiers, say this pink one and this blue one. Intuitively — I'm not claiming anything rigorous — the pink one probably should generalize worse than the blue one. You only see these eight examples; a new test example might land near the pink boundary, and the pink classifier would make a mistake on it, while the blue one seems less likely to make mistakes on test examples.
So intuitively, the blue one seems to have somewhat better generalization, just because it separates the two clusters more clearly, more confidently. So in some sense you can think of h(x) itself as the confidence, because it's a real number: the bigger it is, the more confident you are about this example. And this does matter to some extent — the question is how to reason about it and make it matter in the analysis. Here is the more formal approach. First, an assumption throughout this lecture: we assume that we classify all the training examples correctly. Assume the training error is 0 — perfect classification on the training data. This is, in some sense, reasonable, especially given the success of large networks: typically you can make the training error very small. And it was a reasonable assumption even before deep learning came into play. Before deep learning, people added more and more features, so the dimensionality of the features grew higher and higher; once the dimensionality exceeds the number of examples, you can typically fit the training data with zero error. Formally, this means that for every training example, y_i = sign(h(x_i)). That's what I mean by zero training error. And under this zero-training-error assumption, you can define the so-called margin. Technically, the margin is only defined — at least without modification — for a zero-error classifier. This is the so-called unnormalized margin: the margin at (x, y) is really just y times h_theta(x).
You multiply h(x) by y just because you want to make it a positive number, right? If y is positive — the positive class — you want h(x) to be big; and if y is negative, you want h(x) to be small, i.e., negative. So in some sense the margin is a very informal version of confidence. It's not a probability, of course; it ranges between 0 and infinity. And it's always nonnegative when the classifier is correct on the data point, i.e., when y = sign(h_theta(x)). OK? So that's the definition of the margin of a single example. Then you can define the margin of the classifier on the data set: the minimum margin over all examples, the minimum over i of y_i times h_theta(x_i). Of course, this margin is a function of the classifier — change the classifier and you get different margins. In some sense the blue one, as I drew it, has a bigger margin than the pink one: the pink one has some example with a very small margin, while the blue one has big margins on all examples, so even after taking the minimum over all examples you still have a relatively big margin. But note I'm defining the unnormalized margin here. The unnormalized margin is not exactly the distance from the example to the hyperplane; you have to normalize it for it to become that distance. But in this course, I think we don't need to define the normalized margin per se. For a linear model — you've probably learned this in CS229 — if you normalize this margin by the norm of theta, it becomes the distance from the example to the hyperplane.
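For a linear model h_theta(x) = <theta, x>, the quantities just defined are one-liners (the data below is made up for illustration):

```python
import numpy as np

def margins(theta, X, y):
    """Unnormalized margins y_i * <theta, x_i> for a linear classifier,
    plus the dataset (minimum) margin and its normalized version,
    which equals the distance to the hyperplane."""
    unnorm = y * (X @ theta)
    return unnorm, unnorm.min(), unnorm.min() / np.linalg.norm(theta)

X = np.array([[2.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
y = np.array([1.0, 1.0, -1.0])
theta = np.array([1.0, 1.0])
unnorm, gamma_min, gamma_norm = margins(theta, X, y)
# all margins positive -> zero training error, as assumed in the lecture
```

Here the unnormalized margins are (2, 1, 2), the minimum margin is 1, and the normalized margin is 1/sqrt(2), the distance from the closest point to the hyperplane.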
And the minimum margin would be the minimum distance of all the examples to the hyperplane. Our goal will be something like this: bound the generalization error by the Rademacher complexity — that's what we did in the past — and then bound the Rademacher complexity by some function of the parameter norm ||theta|| and some function of the margin. That's what we'll eventually get out of this lecture. Why do we need to define these margins? Partly it comes from a technique for dealing with the loss function: we're going to introduce a surrogate loss function that takes the margin into account. Intuitively, the reason we want this is that if we believe the margin matters for generalization, then we probably want a bound that depends on the margin — and a loss function that also depends on the margin. So far, the 0-1 loss doesn't depend on the margin: how large h(x) is doesn't change your 0-1 loss on an example; as long as the sign doesn't change, you don't care. So we want something that depends on the margin. The surrogate is called the ramp loss — I think sometimes it's also just called the margin loss. It has a parameter, gamma, which is a kind of target margin, or a reference margin — you can think of it like that. The loss function takes in a single number t. Maybe let me draw it first: the function looks like this, where this point on the t-axis is gamma, and this height is 1.
So when t is larger than gamma, the loss is 0; when t is less than 0, the loss is 1 — that corresponds to the flat region to the left of the origin. And when t is between 0 and gamma, you linearly interpolate: the loss is 1 - t/gamma. So that's the linear region. And why are we interested in this? The reason is that it is, in some sense, an extension of the 0-1 loss. Maybe let me first set up notation — with a slight abuse of notation, you can also write l_gamma(x, y; h) for the margin loss applied to the classifier h, defined to be l_gamma(y * h(x)). The two l_gamma's have different meanings on the left-hand side and the right-hand side, as you can see: the one on the right is the one we just defined. Before, when we talked about loss functions, they took two arguments, y and y-hat; but for classification, the only thing that matters is their product — that's why you only care about y times h(x). In this notation, the ideal 0-1 loss is l_{0-1}(x, y; h) = the indicator that y * h(x) is less than 0. That's just a different way to write the 0-1 loss. So this is the binary classification loss, and the other is the so-called ramp loss. The difference: the indicator function is a step, the indicator that t < 0, and what we do is extend it, make the indicator function continuous. That's basically what we're doing.
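The ramp loss as drawn is a three-case function; a clip captures all three branches at once:

```python
import numpy as np

def ramp_loss(t, gamma):
    """Ramp (margin) loss: 1 for t <= 0, 0 for t >= gamma,
    and 1 - t/gamma in between.  Applied to t = y * h(x)."""
    return np.clip(1.0 - t / gamma, 0.0, 1.0)
```

It also makes the pointwise domination visible: ramp_loss(t, gamma) always lies on or above the indicator that t < 0, which is exactly the inequality used next to bound the 0-1 loss.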
And you can note that l_gamma(y * h(x)) is always at least the indicator that y * h(x) < 0, just because the ramp function lies pointwise above the indicator. Which means the 0-1 loss on an example is always at most the ramp loss on that example. Which means that if you take the expectation over (x, y) drawn from P — the test error, the fundamental thing you really care about — you can at least upper bound it by the population loss under the ramp loss. By doing this, you've made the loss bigger, and then we're going to bound the bigger loss. So eventually, the plan is: bound the test loss under the ramp loss, which is an upper bound on the binary test error. That's our goal: upper bound this. OK. And how do we upper bound it? I think it's probably unclear at first — at least it was when I first read this in a book — why you want this continuous relaxation. It will come in a moment: one of the reasons is that you want the loss to be Lipschitz, so that you can somehow get rid of it. But before doing that, let's lay out the high-level plan first, and then look at the low-level details of how to use the loss. The high-level plan: let L-hat_gamma be the empirical loss corresponding to the ramp loss — this is a function of h — and define the corresponding population loss L_gamma(h) as the expectation of l_gamma(x, y; h).
And then, using the Rademacher complexity machinery we developed, you get that the population loss minus the empirical loss is bounded by 2 times the empirical Rademacher complexity of F plus 3 sqrt(log(2/delta) / n), with high probability — where F is the family of losses defined by the ramp loss. This is what we did in the previous lecture: the generalization error can be bounded by the empirical Rademacher complexity. So this says the next goal is to bound this Rademacher complexity: roughly speaking, once you have that, you have an upper bound on the population ramp loss, and the population ramp loss upper bounds the population binary loss. [INAUDIBLE] Where is the sup? [INAUDIBLE] Sure — yes, but without the sup it's also true, I guess. For a fixed h, this holds with high probability, technically. OK. So now let's talk about the Rademacher complexity, and this reveals why we care about the ramp loss: the Rademacher complexity of F relates to the Rademacher complexity of H in a pretty nice way. The lemma that relates them is called Talagrand's lemma. It says the following: suppose you have a one-dimensional function phi that is kappa-Lipschitz. We have defined Lipschitz functions before — this means that for any two numbers x and y, |phi(x) - phi(y)| <= kappa |x - y|, with absolute values because everything is one-dimensional. OK? Given this, you can look at the composition of this one-dimensional function with any hypothesis class.
The composed class phi composed with H is defined by mapping each h to the function z maps to phi(h(z)). Here phi will basically be the loss function, but the statement is abstract — you can compose any function phi with the hypothesis class. And what you get is that the Rademacher complexity of the composed class is bounded by kappa times R_S(H). Basically, it's saying that if you compose something on top of an existing hypothesis class, and what you compose with — the phi function — is Lipschitz, then you only blow up the Rademacher complexity by the factor kappa, the Lipschitzness. And with this, you can probably see why we care about relaxing the binary loss: the indicator function is not Lipschitz, but the ramp function is — and that's what we do next. By the way, this lemma doesn't have a very simple proof, and we're not going to prove it in lecture. In my own opinion, it's pretty novel and deep — I proved it once myself, but all the existing proofs I know remain somewhat mysterious to me. The high-level intuition is probably reasonable, though: you compose the hypothesis class with something that doesn't introduce much additional fluctuation, so you don't make the hypothesis class much more complicated. If you look at exactly what the formula says: the left-hand side is the expectation over sigma of the sup over h in H of (1/n) sum_i sigma_i phi(h(z_i)), and we want to show this is bounded by kappa times the same quantity with h(z_i) in place of phi(h(z_i)). That's the goal.
That's what this is saying, and you can kind of imagine why it's difficult to prove: you cannot really exchange the expectation with the sup—if you do that, you make the inequality loose—and there's a phi sitting in the middle of the expression, which is very hard to pull out. Anyway, this is just my personal comment about this lemma; it sounds pretty deep to me. OK. Anyway, we're going to use this, and I think it's probably somewhat obvious how: we take the phi function to be the ramp loss, l_gamma(t). Because the ramp loss—I guess let's go back here—the ramp loss is a Lipschitz function, and the Lipschitz constant depends on gamma: on the flat pieces the slope is 0, so the Lipschitz constant depends on the slope of the middle segment. That segment drops from 1 to 0 over a width of gamma, so the slope there is 1 over gamma. So the Lipschitz constant of the ramp loss—of phi—is 1 over gamma. Right. And if you take H' to be the class that maps (x, y) to y * h(x), where h is in H—H' is still not exactly the same as H because there's a y multiplied with h—and then you take F to be phi composed with H', then by Talagrand's lemma, the Rademacher complexity of F, which is what we care about, is at most 1 over gamma (the Lipschitz constant) times R_S(H'). OK. So we've gotten rid of the effect of the loss function by using Talagrand's lemma. And you can also relate H' to H much more easily now, because what's the difference between H' and H? The only difference is that you multiply by the sign y, and Rademacher complexity is not sensitive to sign flips. This is just because—I guess we have done this step before, at least implicitly inside some other proof, right?
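As a quick sanity check on the slope claim, here is a sketch of the ramp loss (the function name and the grid are my own, assuming the lecture's definition: 1 for t <= 0, linear on (0, gamma], 0 beyond gamma); its steepest slope, and hence its Lipschitz constant, is 1/gamma:

```python
import numpy as np

def ramp_loss(t, gamma):
    # 1 for t <= 0, decreasing linearly to 0 on (0, gamma], 0 for t >= gamma
    return np.clip(1.0 - t / gamma, 0.0, 1.0)

gamma = 0.25
t = np.linspace(-1.0, 1.0, 2001)
# finite-difference slopes; the max occurs on the linear middle segment
slopes = np.abs(np.diff(ramp_loss(t, gamma)) / np.diff(t))
print(slopes.max())   # ~ 1/gamma = 4
```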
So the Rademacher complexity of H' is, by definition, E_sigma sup over h of (1/n) sum_i sigma_i y_i h(x_i). And now you look at this: sigma_i y_i has the same distribution as sigma_i, because y_i is just a sign, and however you flip a Rademacher variable, its distribution doesn't change. That's why this equals—you can basically get rid of the y_i, and what's left is exactly the Rademacher complexity of H. OK. So with all of this, combining these two things, what we got is that R_S(F) is at most (1/gamma) times R_S(H). And you can see the interesting thing: first of all, the loss is gone. And second, y is also gone—you don't have any y's on the right-hand side anymore. So basically, at the end, the only thing that matters is h(x). OK. And with this, we can put all of it together to get a bound on the binary test error. So recall that we assume perfect classification: y_i times h(x_i) is bigger than 0 for every i—we assume a perfect fit. And then you can take gamma to be the minimum margin for this data set. So now—I actually had a typo here, sorry—let's just call this gamma; we use gamma to denote this. So then if you look at L-hat_gamma(h), the empirical ramp loss, it's going to be 0, because it's the average of l_gamma(y_i h(x_i)), and y_i h(x_i) is always at least gamma, and recall that the ramp loss is 0 whenever its argument is at least gamma. So basically, every training example has zero loss under the ramp loss. On the training examples, the binary loss and the ramp loss are not different—they are both 0. And therefore, you have the following sequence of inequalities. You first bound the 0-1 loss of h by the ramp loss.
This is because the ramp loss is always larger than the 0-1 loss. And then you say that this is smaller than L-hat_gamma(h) plus—let's do it a little bit slowly—O of the Rademacher complexity of F, plus the square root of log(2/delta) over n. And then you use the inequality between F and H, so you get L-hat_gamma(h) plus O(R_S(H) over gamma), plus the square root of log(2/delta) over n. And then the first term is 0, as we claimed, because it's the empirical ramp loss on the training data. So that becomes 0, and you're left with just O(R_S(H) over gamma). Right. So there's a caveat with this inequality. I'm not sure whether any of you have noticed it, but if you have, maybe hold on for a second; let's first interpret it somewhat. OK, so what's the caveat here? There's actually a mistake in some sense—not a serious one, but there is an issue with this derivation. The reason is: what is the definition of gamma? Here the definition of gamma depends on the data, and that messes up all the independence. When we did all of these things before, gamma was a constant: you fix gamma first, and then you draw your data points, and you have your Rademacher complexity, and so on and so forth, right? But here we take gamma to be something that depends on the data, which breaks all the Rademacher complexity machinery, because in that machinery you cannot let your loss function, or your function class, depend on the data. The function class F cannot depend on the data. Right? When we deal with uniform convergence, h-hat—the final classifier you bound—can depend on the data; that's the benefit of uniform convergence. But the function class F cannot depend on the data. So that's the small caveat.
But this is not a very big deal. If you choose gamma to be something that depends on the data, then your function class depends on the data, and you break the argument. I'm not going to deal with this very formally—for mathematical rigor, of course, you can, and it's relatively easy to fix. The way to fix it is to do another union bound over the choice of gamma. So you still choose gamma to be the minimum margin, which depends on the data; but what you should do is first prove this bunch of inequalities for every fixed gamma. You can prove everything up to here for every gamma, and then in the last step you choose gamma to be the one you wanted, because by then you're already done with the Rademacher complexity—you just plug in whatever gamma you want, right? And the way to do it is actually relatively easy. Roughly speaking, gamma is a single number, and doing uniform convergence over a single scalar parameter is always relatively easy. And here it's even easier because you don't have to care about multiplicative bounds. So suppose you have a bound B on the largest possible gamma. Then what you can do is discretize the range into buckets, something like this: one bucket is [B/2, B], another bucket is [B/4, B/2], and so on, and you prove the bound for every endpoint in this discretization. Within each bucket, things just don't really change much: the only difference between two numbers in the same bucket is a factor of 2, so at most you lose a factor of 2. So basically, this is saying you only have to show the bound for those boundary points of the buckets.
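The bucketing step can be sketched in a few lines (the function name, and the choice of B and the smallest gamma of interest, are my own illustration, assuming the geometric bucketing described above):

```python
import math

def gamma_grid(B, gamma_min):
    # Discretize (gamma_min, B] into geometric buckets [B/2^(k+1), B/2^k]:
    # the grid holds the bucket endpoints B, B/2, B/4, ...
    grid = []
    g = B
    while g > gamma_min:
        grid.append(g)
        g /= 2.0
    grid.append(g)          # one endpoint below gamma_min closes the last bucket
    return grid

B = 1.0
grid = gamma_grid(B, 1e-3)
print(len(grid))            # about log2(B / gamma_min) points, i.e. 11 here
# any gamma in (1e-3, 1] is within a factor of 2 of some grid point
```

Since there are only about log2(B/gamma_min) endpoints, a union bound over them costs only a log-log factor in the final bound, which matches the lecture's remark.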
And how many points are there? There are only about log B points, in some sense, and you can do uniform convergence over all of them. If you get even more technical, you can even get a log-log B dependency. But anyway, that's the rough idea for this last step of uniform convergence. Because it's relatively easy, if you look at the papers, most of them don't actually do this step, just for simplicity. Of course, they state the theorem in a different way so that it is still correct—they just skip this very last step to make it simpler. And that's also what I'm going to do: I'm not going to prove a super rigorous theorem with you. But if you really carried it out, the theorem statement would look like this. It would say: with probability at least 1 - delta, for every gamma between 0 and gamma_max, and for every h, L_gamma(h) is at most L-hat_gamma(h) plus O(R_S(H) over gamma) plus the square root of log(1/delta) over n, plus the square root of log(gamma_max over gamma) over n—something like this; I guess gamma_max should be larger than gamma so the log is fine. And as a corollary, you have that the 0-1 loss of h-hat, the hypothesis you care about, is at most O(R_S(H) over gamma_min) plus the square root of log(1/delta) over n. So here gamma_min is the empirical margin—I think I have somewhat inconsistent notation here, sorry—gamma_min is min over i of y_i h(x_i). OK? [INAUDIBLE] Can you [INAUDIBLE] for gammas [INAUDIBLE] so you can make [INAUDIBLE] but then you just have gamma_max equal 1 so that last one [INAUDIBLE] OK. I think the question is: why don't we take gamma_max to be really small, right?
So first of all, it's not clear you can always guarantee that the final empirical gamma is really small—and actually, you want gamma to be big: you want the empirical data to have a bigger margin so that your generalization bound is smaller, right? At least that's the interesting regime. The very-small-gamma regime is probably not the most interesting one, because your right-hand side would be very big. [INAUDIBLE] So actually, if gamma is really, really small—oh, I'm sorry, my bad, there is a third term in this as well; let me fix that first. But suppose your gamma is really, really small. Then you probably don't even need the third term, because the first term is already very big, and it already governs your generalization bound. So you do care about a somewhat large gamma. But there's still the question of what happens if all the scales are very, very small. I think it's really just that—let me see—does that answer the question? It did. [INAUDIBLE] Yeah, I think there are some small things—for example, what if all the numbers are extremely small? I think you can make this bound a little bit tighter in some ways. Yeah. I think there's another question. [INAUDIBLE] This one? Oh, this is log gamma_max—and the same thing here. Yep. OK, so I guess [INAUDIBLE]—generally, I don't recommend spending too much time thinking about this small subtlety. The most important thing is the first term. So maybe the interpretation is more important. The first term is R_S(H) over gamma_min, where gamma_min is the empirical margin of the entire data set.
And this is saying that if you are very confident about all the training examples, then you're going to have a better generalization bound—your bound will be smaller. Conversely, suppose the outputs on the training examples are very close to 0. This is min over i of y_i h(x_i), so if the outputs of h on your training examples are very close to 0, then your gamma_min is small. Actually, only one of them matters in this definition: as long as one of your training examples has an h value very close to 0, your generalization bound will be quite a bit worse. So you want all the examples to be very far away from 0—very confident, in some sense. And on the other hand, you want the numerator, the Rademacher complexity, to be as small as possible: you want your classifier to be less complex. And also, there's another thing to check here, which is that the scales match—the scaling actually is the right thing. For example, we have talked about how Rademacher complexity depends on the scale of your function: if you multiply all your functions by a half, your Rademacher complexity will be reduced by half. And you can see here that this bound makes sense because you cannot cheat by doing that. So suppose you take h' to be all the functions divided by, let's say, 100: h' = h/100. Then what happens is that the Rademacher complexity of h' is indeed divided by 100, but gamma_min is also divided by 100. So that's why you cannot cheat this bound by a trivial rescaling of your hypothesis class. And that also kind of shows that something like the margin has to show up here: if you only had R_S(H), your bound couldn't be right, because it wouldn't be invariant to scaling. So basically, I'm saying that this bound is invariant to scaling. OK?
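Here is a tiny numerical sketch of that scale-invariance argument (the random data, the classifier, and the use of ||w||_2 as a stand-in for the Rademacher complexity of the L2-bounded linear class are my own illustrative assumptions — R_S scales with the norm of w, so the norm-to-margin ratio is the scale-free quantity):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 100, 5
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y = np.sign(X @ w)                   # labels that w classifies perfectly

def margin(w, X, y):
    # gamma_min = min_i y_i * h(x_i) for the linear predictor h(x) = w^T x
    return np.min(y * (X @ w))

c = 0.01                             # rescale h by 1/100
ratio1 = np.linalg.norm(w) / margin(w, X, y)
ratio2 = np.linalg.norm(c * w) / margin(c * w, X, y)
print(ratio1, ratio2)                # equal: rescaling shrinks both numerator and margin
```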
So this basically concludes our treatment of the loss. The take-home is that you have two quantities: one is the margin, and the other is the Rademacher complexity. And now let's bound the Rademacher complexity for linear models. So what I'm going to do is the linear model today, and next lecture we'll talk about deep learning more generally. From next lecture, we're going to talk more about nonlinear models in general: I'll first give an overview of deep learning and then come back to the Rademacher complexity of nonlinear models. That's the high-level plan. So for linear models, here is a theorem. Suppose you have a hypothesis class H which maps x to w-transpose x, where w is your parameter, and your parameter w has 2-norm at most B. And also let's assume the data distribution has bounded L2 norm: the expectation of the L2 norm squared is at most C squared. Suppose you know these two things. Then you can bound the Rademacher complexity: the empirical Rademacher complexity is at most B over n times the square root of the sum of the squared norms of the x_i's. I guess the scale of this is not immediately obvious, but the average version is easier to interpret: the average Rademacher complexity is bounded by B times C over the square root of n. So first of all, you get the 1-over-square-root-of-n dependency, which is very typical for Rademacher complexity bounds. And second, in the numerator, you get B, which is the bound on the L2 norm of the parameter, and you also get C, which basically says how large your data is—what the norm of the data points is, right? And you should have both of these come into play because, again, Rademacher complexity is sensitive to scale. You need all the scaling factors there because otherwise you could cheat, right?
So for example, if you didn't have the C here, the bound couldn't be true, because you could scale your x arbitrarily to make the Rademacher complexity arbitrarily big. So you have to have all the scalings right. So that's the first thing we're going to show for linear models. We're going to have some other theorems about linear models under other constraints, and then we're going to compare them with each other and also with the previous bound. But let's first prove the theorem. This also demonstrates how you generally bound Rademacher complexity using this kind of analytical approach. So we start with the empirical Rademacher complexity. By definition, you draw some sigmas and then look at the sup of this sum, and here you write w-transpose x_i because that's the model output; you take the sup over w, and the constraint on w is that its L2 norm is at most B. And now, let's do some derivations. First, we basically want to solve the sup, so I want to understand what this expression is. We realize that it is actually a linear function of w: you can write it as the inner product of w with (1/n) times the sum of sigma_i x_i, just by pulling the linear terms together. And now, what's the sup of this? This is easy: what is the sup over w, with 2-norm of w at most B, of the inner product of w with some fixed vector, say v? Basically, we want to find the vector with maximum correlation with v, under a constraint on the norm of w, right? So there are multiple ways to do this. For example, one way is to use Cauchy-Schwarz: the inner product of w and v is at most the norm of w times the norm of v, which is at most B times the norm of v.
And actually, this can be attained by choosing w appropriately. So the answer is that the sup is equal to B times the 2-norm of v. And how do you attain the equality? You just choose w to be in the same direction as v, so that the Cauchy-Schwarz inequality is tight, and you get the right number, right? I think this was one of the homework zero questions you had. OK. And you can apply this here: you get B times the norm of the vector v, where v corresponds to this sum. So you've gotten rid of the sup. That's a big thing for us, because the sup is very hard to deal with. And now we have the norm of a random variable, and this random variable is a sum of random variables. Note that here we are talking about the empirical Rademacher complexity, so the only randomness comes from sigma, not from x. But it's still a random variable—a random signed mixture of these x_i's. And how do you deal with this? Maybe for preparation, let's first pull out the B and the n, so we need to bound the expectation of the 2-norm of the sum of sigma_i x_i. And you say that this is at most the square root of the expectation of the square. This is just because E[X] <= sqrt(E[X^2]) for any nonnegative random variable X. The nice thing about squaring is that you can expand it—I think we have seen this kind of manipulation more than once, in some other contexts as well. So you can just expand what's inside the expectation: it equals the sum of sigma_i squared times the norm of x_i squared, plus the sum over i not equal to j of sigma_i sigma_j times the inner product of x_i and x_j. This is just an expansion, and in the second sum you have i not equal to j.
And because i is not equal to j, the expectation of sigma_i sigma_j is 0: they are independent random variables, so it equals the expectation of sigma_i times the expectation of sigma_j, which is 0. So that term is gone. What we have is B over n times the square root of the expectation of the sum of sigma_i squared times the x_i 2-norm squared. Sigma_i squared is actually 1, because it's a Rademacher variable, so we're left with the sum of the x_i 2-norms squared. And the expectation is over sigma, right? It's always over sigma. So since the x_i's are not functions of sigma, the expectation just disappears. So we just have this, and this is exactly our bound. Here it appears to decay like 1 over n, but the sum also grows as n grows to infinity, so when you balance them you actually get the 1-over-square-root-of-n dependency, which we'll see from the average version. Right? If you average over x again—recall that the average Rademacher complexity is the average of the empirical Rademacher complexity over the randomness of the data set—then you get B over n times the expectation over the randomness of S (S is the data set, the collection of the x_i's) of this square root. So now you're in exactly the situation where you have a square root inside the expectation, which is not very convenient. So you raise it to a higher power using the same Jensen trick: you get B over n times the square root of the expectation of the sum of the squared norms. And each of the x_i's has the same distribution, and we assumed that the expectation of the squared norm is at most C squared. So you get n times C squared inside, and when you take the square root, you get B times C over the square root of n. OK, sounds good. Any questions? OK. So next, I'm going to show another—go ahead.
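Since the sup has the closed form B times the 2-norm of sum_i sigma_i x_i, over n, the empirical Rademacher complexity can be estimated directly by Monte Carlo and checked against the bound just derived (the random data and all the sizes here are my own toy choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, B, trials = 200, 10, 2.0, 5000
X = rng.normal(size=(n, d))               # toy data matrix, rows are the x_i

# Closed form for the sup (Cauchy-Schwarz, tight):
#   sup_{||w||_2 <= B} (1/n) sum_i sigma_i w^T x_i = (B/n) ||sum_i sigma_i x_i||_2
sigma = rng.choice([-1.0, 1.0], size=(trials, n))
emp_rad = (B / n * np.linalg.norm(sigma @ X, axis=1)).mean()   # Monte Carlo over sigma

bound = B / n * np.sqrt((X ** 2).sum())   # (B/n) sqrt(sum_i ||x_i||_2^2) from the lecture
print(emp_rad <= bound)                   # the Jensen step makes the bound slightly loose
```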
[INAUDIBLE] like Cauchy-Schwarz, you could just bound it by [INAUDIBLE] Yes. So this is a great question. The question is: what if you don't use Cauchy-Schwarz, but the triangle inequality instead? That's actually a very good question—let's try it a little bit. So where do you want to use it? In the second application—like from here to here, basically, right? Yeah. So if you don't do this, and instead you do the triangle inequality, you're going to bound it by B over n times the expectation of the sum of the x_i 2-norms. OK? So let's say this is B over n times the sum of the expectations of the x_i 2-norms. And let's see what happens here. You have n terms, and each of these terms is on some constant scale, C. So basically, the sum will be on the order of n, and then it cancels with the 1/n, so you just get B times C. So basically, at the end, you don't have any dependency on n anymore. That's strictly worse, because we want a dependency like 1 over square root of n—something that goes to 0 as n goes to infinity. And the reason why this is a loose inequality is that the quantity inside is a sum of things that can cancel each other, because the sigma_i's flip signs, right? And if you do the triangle inequality, you are basically assuming all of these vectors point in the same direction. Even if all the x_i's really are in exactly the same direction—let's say that's the case—with the sign flips, they cancel each other: one of them goes in one direction,
the other goes in the opposite direction. So you have the cancellation, and that's why this Cauchy-Schwarz—or rather, this Jensen inequality—is tighter. And this is exactly the gist here: you have to use the cancellation between the sigma_i's. If you don't use it strongly enough, in some sense, you won't end up with a good bound. [INAUDIBLE] Right. Exactly, exactly. Yeah. OK, cool. And the next thing is another theorem. This theorem still deals with linear models, but with a different norm on the parameter, and you'll see a different bound. This is one of the ways I motivated Rademacher complexity: I was saying that you can get more precise dependencies on whatever norms you want to use, for example. So suppose we have a different H. It's still a linear model, but the constraint on the parameter is now the L1 norm: the L1 norm of w is at most B. And we assume that the infinity norm of x_i is at most C, for all i. And also, let's specify a dimension: x_i is in R^d. Actually, this is an interesting point. Before, we didn't even specify the dimension of x in the previous theorem, because it doesn't show up in the bound. It has to be a vector in some dimension, of course, but the dimensionality doesn't matter—you can even apply that theorem to infinite-dimensional vectors, as long as the norm of x is bounded by C. But the next bound will depend on the dimension d. The empirical Rademacher complexity is at most B times C times the square root of 2 log(2d), over the square root of n. Right. And you can see that now the 1-norm starts to matter. If you ignore the log d factor, it's still basically B times C, but with a different measurement: the definitions of B and C are different.
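You can see the cancellation numerically (toy gaussian data; the sizes are arbitrary): with the sign flips, E||sum_i sigma_i x_i|| grows like sqrt(n), while the triangle-inequality bound sum_i ||x_i|| grows like n.

```python
import numpy as np

rng = np.random.default_rng(3)
d, trials = 5, 2000
with_cancel, no_cancel = {}, {}

for n in [100, 1600]:
    X = rng.normal(size=(n, d))
    sigma = rng.choice([-1.0, 1.0], size=(trials, n))
    # E || sum_i sigma_i x_i ||_2, estimated by Monte Carlo: uses the cancellation
    with_cancel[n] = np.linalg.norm(sigma @ X, axis=1).mean()
    # triangle-inequality bound: ignores the cancellation entirely
    no_cancel[n] = np.linalg.norm(X, axis=1).sum()

# growing n by 16x grows the first quantity ~4x (sqrt rate), the second ~16x (linear)
print(with_cancel[1600] / with_cancel[100], no_cancel[1600] / no_cancel[100])
```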
Now the B is the 1-norm of w, and C is the infinity norm of x. We'll compare these two theorems after we prove this one. Let me see how much time I have—I think I do have time to prove it and then compare. The proof won't be complete, in the sense that I have to invoke a lemma which will actually be proved by you in the homework, but let's do most of the steps. So the definition is the same thing. And again, you can view (1/n) times the sum of sigma_i w-transpose x_i as some w times some v, where v is (1/n) times the sum of sigma_i x_i. So we are doing the same decomposition, but now you are taking the sup over the 1-norm ball. And you know that the sup over w with 1-norm at most B of the inner product of w and v—this is also relatively easy to prove—is equal to B times the infinity norm of v. So that's how we eliminate w: we're just left with the infinity norm of v. OK, so you see what's going on here—this equals that. However, now we've got a problem. We have this infinity norm, so how do we proceed? You could, for example, use the triangle inequality, but then, again, we don't use the cancellations: you'd swap the sum with the infinity norm and lose the cancellation. So how do I deal with this? In some sense, the infinity norm is something different, and you cannot use the same analytical tool. So what I'm going to do is take a somewhat different approach. So, starting from this equality—this is a detour in some sense—what you do is look at what this quantity really means.
So you can say that this equals B over n times the sup of—sorry, I think I probably got the wrong version of the notes; that's why I was a little bit surprised by them. The newer version shouldn't look like this. But anyway, the first thing you can do is normalize w to have L1 norm 1—call it w-bar—so you get B times the sup over w-bar of the inner product of w-bar and v. That's easy. And then what you can say is: if you think about what maximizes this inner product among all L1-norm-bounded vectors, the only thing you care about is—the sup is actually literally attained when w-bar is one of plus-or-minus e_1, plus-or-minus e_2, and so forth, up to plus-or-minus e_d. This is my claim. And the reason for this claim is just that if you look at the sum of w-bar_i v_i—suppose i indexes the coordinates, OK?—you know that this sup of w-bar inner v equals the infinity norm of v, and what you care about is the extreme points: in what case do you achieve this equality? And it turns out that the way you achieve it is to take w-bar_i to be 1 (with the appropriate sign) for the i whose entry is the largest—for the i such that |v_i| is the max over |v_j|, j in [d]. Right. I'm not sure whether this is obvious—it probably requires a little bit of thinking offline. But at least you can verify: if you choose w-bar_i to be 1 for this i, and w-bar_j to be 0 for all the other j's, then the sum of w-bar_j v_j is equal to just v_i. And |v_i| is equal to the infinity norm of v, because v_i is the largest entry in absolute value—suppose we take the absolute value here; then this is exactly right. So if you choose this w-bar, you get exactly the infinity norm of v.
And also, if you don't have the absolute value on the left-hand side, you can flip w-bar_i to be either 1 or minus 1. Does that make sense? It's relatively easy, but it probably requires a little bit of thinking offline as well. But the basic claim is that when you do this kind of maximization over the L1 ball, you always land on a vertex—the extreme points are always the vertices. That's another way to think about it. And the vertices are the signed natural basis vectors. And then we've basically arrived at a finite hypothesis class. What does that mean? So basically, you can think of your hypothesis class H-bar as the maps x to w-bar-transpose x, where w-bar is only inside this family of plus-or-minus e_1 up to plus-or-minus e_d. Right. So you don't have all the linear classifiers anymore—you just have 2d linear classifiers now. So basically, pulling the B outside, this quantity just equals B times the Rademacher complexity of this hypothesis class H-bar. OK? And we have a claim from one of the very early lectures—this is the lemma from before—that for a finite hypothesis class, the Rademacher complexity is bounded by the square root of the log of the hypothesis class size times something like M squared, over n, where M squared bounds the values this hypothesis class can output. So let's compute what M squared is. The size of the class is pretty clear: it's 2d. But what is the corresponding M? So we can say that for every w-bar in this set of plus-or-minus e_i's, by Hölder's inequality, |w-bar-transpose x_i| is bounded by the L1 norm of w-bar times the L-infinity norm of x_i.
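The vertex claim is easy to check numerically (random v; the comparison against random points of the L1 sphere is my own sanity check, not part of the proof): the sup of the inner product over the L1 ball of radius B equals B times the infinity norm of v, attained at a signed basis vector.

```python
import numpy as np

rng = np.random.default_rng(4)
d, B = 8, 3.0
v = rng.normal(size=d)

# closed form: the sup over {||w||_1 <= B} is attained at a signed vertex B * (+-e_j)
j = np.argmax(np.abs(v))
w_star = np.zeros(d)
w_star[j] = B * np.sign(v[j])
print(w_star @ v, B * np.abs(v).max())   # equal: B * ||v||_inf

# sanity check against random feasible points scaled to have L1 norm exactly B
W = rng.normal(size=(10000, d))
W = B * W / np.abs(W).sum(axis=1, keepdims=True)
print((W @ v).max() <= w_star @ v + 1e-9)   # no feasible point does better
```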
And this is bounded by 1 times C, where C is the infinity-norm bound on the x_i's. And that means that in the lemma, the thing we have to verify—that the average of the squares of the outputs is at most M squared—goes through: because each term is at most C, the average of the squares is at most (1/n) times n C squared, which equals C squared. So basically, the corresponding M squared is C squared, and that's why R_S(H-bar) is at most the square root of 2 M squared log|H-bar| over n, which is just the square root of 2 C squared log(2d) over n. And now recall that we have the B we pulled out: R_S(H) is at most B times R_S(H-bar), which is at most B times C times the square root of 2 log(2d), over the square root of n. OK, any questions? So yeah, I think we're about time. At the beginning of the next lecture, I'll discuss how you compare, or how you interpret, these two theorems. Right. These two theorems have their strengths in different cases, depending on what kind of data you have and what kind of w's you can fit from the data. I will do that in the next lecture. OK, sounds good. I guess that's all for today. See you next Monday.
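Because the L1 case reduces to 2d fixed classifiers, this bound is also easy to check numerically (random bounded data; the sizes are my own toy choices): the sup over {±e_1, ..., ±e_d} is just the largest absolute coordinate of the correlation vector.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, trials, C = 300, 20, 4000, 1.0
X = rng.uniform(-C, C, size=(n, d))         # guarantees ||x_i||_inf <= C

# H_bar = {x -> w^T x : w in {+-e_1, ..., +-e_d}}: outputs are +- coordinates of x,
# so the sup over the 2d classifiers is max_j |(1/n) sum_i sigma_i x_ij|
sigma = rng.choice([-1.0, 1.0], size=(trials, n))
corr = sigma @ X / n                        # shape (trials, d)
emp_rad = np.abs(corr).max(axis=1).mean()   # Monte Carlo estimate of R_S(H_bar)

bound = np.sqrt(2 * C**2 * np.log(2 * d) / n)   # finite-class lemma with |H_bar| = 2d
print(emp_rad <= bound)                     # the finite-class bound holds with room
```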
Stanford CS229M: Machine Learning Theory, Fall 2021. Lecture 1: Overview, supervised learning, empirical risk minimization.

OK, so let's get started. So the formulation-- so most of this course will be about supervised learning. In some part, we're going to talk about unsupervised learning, but I think maybe like 80% of the lectures will be about supervised learning. So this is about supervised learning. OK. So let me just-- so we have some definitions. So input space-- this is the data that you want to classify or do regression on. And there's the label space, that's called y. And there's a joint probability distribution p over the space of x times y. And there is a-- let me see how do I-- I guess I would still try to do-- maybe I should do this. And this is better, right? And we're going to have some training data points. So these are x1, y1 up to xn, yn. Each data point is a pair of input and output. And we will use n for the number of examples forever. So n is reserved for the number of examples for this course. And each of these data points xi, yi is assumed to be drawn i.i.d. from this distribution p. So p is the distribution we are interested in. And then we have some examples from it. And we have some loss function. And this loss function takes in two labels, and it outputs a number that characterizes how different these two labels are. And I think the typical convention is that the first one is the predicted label. And this is the true label-- oh, this is the observed label. And you assume that the loss is always non-negative. I think in some cases the loss can be negative. But in most of the cases, the loss is non-negative. And now suppose you have a-- you also have a predictor. Because this is what you are interested in. You want to have-- sometimes it's called a model. Sometimes it's the hypothesis. We're going to use all of these interchangeably.
All of these are used in somewhat different sets of contexts, but they all mean the same thing. It means the function you are looking for to predict your label. So this is a function called h. It's a mapping from y to-- x to y. And you can define the loss of the predictor h on an example x, y: the loss will be-- you first plug in h of x, which is your prediction, and you have y. This is the loss. And then after we define all of this, you can define the so-called expected, or population, risk or loss. This is kind of the interesting thing about machine learning: everything has like two names at least. I think two is the lower bound. Sometimes you need three. And also, my brain is kind of like-- for different kinds of situations, I use a different name for this. So be prepared for that. Just because when I learned this part of things in one literature, I used that name, and then if you learn something else, those kinds of papers use a different name. So my brain just has these names spread into different parts. So I might use inconsistent terminologies a little bit. But all of these are the same. Expected just means population. And risk just means loss. But of course, I will try to be consistent as much as possible. And this expected risk or population risk is defined to be the expectation of the loss. And here, the random variables are x and y. And they are drawn from this population distribution p. And that's why it's called population risk. And this is your final goal. Your final goal is basically to minimize-- find h that minimizes the population risk. At least, this is the goal for the first at least 15 lectures. Right? So this is the goal for supervised learning. You just want to predict y as well as possible. OK? OK, so to achieve this goal, you also have to introduce more concepts, right? So one concept is this so-called hypothesis class-- sometimes, hypothesis family.
And you can also call it predictor class, predictor family, model class, model family. So let's call it capital H. And this is a set of functions from x to y. All right. And you can define the so-called excess risk. Because at the end of it, you're going to search over a set of functions, right? And maybe the set of functions is very bad. For example, the set of functions only contains one function. So that's why people define this so-called excess risk, which tries to define your error relative to the power of this hypothesis class, or this set of functions. All right. So this excess risk is with respect to capital H, so it is defined to be your population loss or population risk minus the best you can find in this family. OK. So basically, this term is the best you can do in H. Is the inf the same as the min, sorry? The inf. Yes. So this is a good question. So inf versus min-- for this course, let's say they are exactly the same. Of course, they are not exactly the same, just because sometimes you don't have a unique minimizer, right? Maybe I'll have a post to explain the subtle differences between these two. But I think for exactly this entire class, you can just assume inf is the same as min. Yup. Cool. And this is at least 0, because this term is the minimum, and there's no way you can get better than the minimum. So that's why it's larger than or equal to 0. So in some sense, think about excess risk as one way to only think within the family H, right? So now, if you get 0 excess risk, that means that you cannot do anything better within this family. Right? Of course, if you change your family, maybe you can get something else. But at least within this family, there's no way you can do better. OK. So this is the basic language we will be working with for this entire course. Any questions so far?
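To make these definitions concrete, here is a tiny made-up example (a NumPy sketch, not from the lecture): a two-function hypothesis class under a squared loss, with each population risk estimated by Monte Carlo and the excess risk computed relative to the best function in the class.

```python
import numpy as np

# Toy instance of the definitions above (my own example): squared loss,
# a two-function hypothesis class, and the excess risk of each hypothesis.
rng = np.random.default_rng(0)

def sample(n):                        # the joint distribution p over (x, y)
    x = rng.normal(size=n)
    y = 2.0 * x + rng.normal(size=n)  # true relationship: y = 2x + noise
    return x, y

def pop_risk(h, n=200_000):           # L(h) = E[(h(x) - y)^2 / 2], Monte Carlo
    x, y = sample(n)
    return np.mean(0.5 * (h(x) - y) ** 2)

H = {"h1": lambda x: 1.5 * x, "h2": lambda x: 2.0 * x}
risks = {name: pop_risk(h) for name, h in H.items()}
best = min(risks.values())            # the inf (here: min) over the family
excess = {name: r - best for name, r in risks.items()}
print(risks, excess)                  # h2 matches the truth, so its excess risk is 0
```

Note the excess risk is relative: h2 gets excess risk 0 even though its population risk is nonzero, because the noise floor cannot be beaten within any family.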
In any case, just feel free to interrupt me at any point. You don't have to wait until I pause, in either the Zoom meeting or here. So some quick examples to make it less abstract-- I assume this is relatively abstract so far. So one typical case is the regression problem, where y, the label set, is the real numbers, so they are continuous labels. And oftentimes, for a regression problem you have the so-called square loss. So this is l of y-hat, y equals maybe something like half of y-hat minus y, squared. For example, if you want to predict the temperature, it makes sense to use the square loss. Of course, there are other types of loss. And another possibility is the classification problem. So in this case, y is a discrete set. You have a set of k labels. Maybe it could be two labels-- cat versus dog. Or it could be multiple labels. And then the final loss you care about often is this so-called 0-1 loss. So basically, you say that if you didn't get the right label, then the loss is 1. Otherwise, the loss is 0. This 1 here is the indicator. So the indicator of E is 1 if E happens, and the indicator of E is 0 otherwise. So you'll see that when you really do practical machine learning and algorithms, you are not going to use this loss, because of other issues, but this is the loss you care about eventually. At least, this is one of the losses you could care about eventually. This is the so-called accuracy, or the error, right? But when you train, you maybe use cross entropy. That's a slightly different thing. So OK. So now, this is the setup and the goals. And now, let's talk about one important algorithm, which is called empirical risk minimization. This is the algorithm, or type of algorithm, that we will analyze for quite some time. So the algorithm is very simple. So I guess this is what you do in practice every day. So you have some training loss. Sometimes, it's called empirical loss.
And sometimes, it's called empirical risk. So for this loss, we use l hat. l hat means it's empirical. Every time we use a hat here, it pretty much means empirical. So you have the average of the loss over all the examples, l of h, xi, yi, for i from 1 to n. And then, you do the so-called ERM, empirical risk minimization, where h hat is-- you find the best model in this family. I guess here I'm using argmin. For argmin versus min, it's exactly the same story for this course. So you find the best model within the family that minimizes your empirical risk. And you can break ties arbitrarily. We don't care about breaking ties in many cases. And so this is the algorithm. To use this algorithm, you may need some other optimizers to find the minimum, right? But this is the abstract way of thinking of the algorithm. You find a minimizer. And the key question is, why is this a good algorithm? Why is this doing something sensible? And one of the key properties-- one of the reasons why this is somewhat meaningful is, I guess as you know already from previous classes, because the xi, yi are i.i.d. from p. So then, if you look at the expectation of the loss on one example, over the randomness of that example, this is equal to the population loss, right? This is exactly the same as drawing x, y from p and taking the expectation of l of h, x, y. To verify this is just a change of notation. And in some sense, the empirical loss is an average. So if you take the expectation of the empirical loss l hat of h, which is an average of all of these, you get 1 over n times n times the expectation of l of h, x, y. All right, so this will be equal to L of h. And here, the randomness comes from all the xi's and yi's.
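The unbiasedness claim above, that the expectation of l hat of h equals L of h, can be checked by simulation. This is a sketch with an arbitrary distribution of my choosing, not the lecture's own code: average the empirical risk over many fresh datasets and compare it with the population risk computed in closed form.

```python
import numpy as np

# Sanity check of E[l_hat(h)] = L(h) by simulation (a made-up toy setup).
rng = np.random.default_rng(1)
h = lambda x: 0.5 * x                   # some fixed predictor
loss = lambda yhat, y: (yhat - y) ** 2  # squared loss

def emp_risk(n):                        # l_hat(h) on one fresh dataset
    x = rng.normal(size=n)
    y = x + rng.normal(size=n)          # y = x + noise
    return np.mean(loss(h(x), y))

# Population risk in closed form: L(h) = E[(0.5x - x - eps)^2] = 0.25 + 1 = 1.25
pop = 1.25
avg = np.mean([emp_risk(20) for _ in range(20_000)])
print(avg, pop)  # the average of many empirical risks hovers around L(h)
```

Each individual l hat here is quite noisy (only n = 20 examples); it is the average over datasets that matches L(h), which is exactly what unbiasedness says.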
So this is the typical justification we have for this kind of algorithm: because the empirical loss is an estimate of the population loss, minimizing the empirical loss probably would lead you to minimize the population loss. So in some sense, a good part of this course is to justify more formally why this is the right thing for us to do. Intuitively, it sounds right. But formally, we want to prove that this is actually the right thing. And it's actually not that easy, because it does depend on some other things, for example, how many examples you have and how large your hypothesis class H is. It's not that simple. This is just kind of an intuition. So all right, any questions so far? And also, we're going to have-- I assume that most of you know this. This is just a formal definition. When you really do this in a computer, you have to have a parameterized family so that you can optimize the parameters. So you can also have a parameterized family. So you say H is, for example, something like the set of h sub theta, where theta is in some space of parameters. And maybe let's say theta is in R^p, or some subset of it-- theta is the parameter. And capital Theta is the family of parameters. This is because sometimes you want to say that you only do it for sparse parameters, or only for certain kinds of parameters. And one example of this is that you can take h theta of x to be equal to theta transpose x. Then these are all the linear models. OK, so this is easy. And then you can also do ERM for a parameterized family. So I guess this is actually probably the most important case, because in particular you do the parameterized family. And now your training loss-- let's define the training loss still as l hat, l hat of theta.
But with a little abuse of notation, you say that theta is the input of the training loss. Theta is the parameter. Before, we said the training loss is a function of the model, and now it's a function of the parameter, because the model and the parameter are just in a one-to-one correspondence, in some sense. Maybe not one-to-one, but they have a correspondence. So your representation of the model is really through the parameters. So each parameter corresponds to a model. And this is just-- I'm just writing what you're expecting, probably. So this is the empirical loss. And here I'm overloading the notation a little bit, and we are going to overload this notation in this course many times. And sometimes, alternatively, again with a little abuse of notation, you write this as l of theta, x, y. Just because theta, and x, and y, are what you care about: after you know these three things, you can compute a loss. These are just some notations. We are sometimes going to use these notations a little bit interchangeably, so it's good to be aware of that. And you can define the so-called ERM solution, which is the argmin of the empirical loss, where theta is in this parameter set, capital Theta. And sometimes you just write theta hat as a shorthand for theta hat ERM. But you don't have to remember all of these cases; we're going to remind you later. And the goal, as you can expect, is really, again, just to show that the excess risk of theta hat ERM is small. Because that's the success criterion, right? You want to show that you find some theta hat, and this theta hat is working, in the sense that the excess risk is small. And this is basically the goal of the first few weeks. And the core, in some sense-- I guess here's kind of a trailer.
In some sense, the core idea is to show that l of theta is close to l hat of theta, right? Because you are minimizing l hat of theta, but you care about l of theta. So you have to show these two are similar in some sense, but it's not that easy. Next question. [INAUDIBLE] Sorry, this is-- I guess that's me. Sorry, this is me. My bad. Actually I have a typo here, you might notice as well. Thanks. OK. [INAUDIBLE] goal again? So the goal is to show that your algorithm works, right? So this theta hat ERM is doing something good. And what does it mean for a model to be good? At least in our definition, it really only means that the excess risk is small, right? If you can make sure that you are close to getting the best model in this family, then that means you are doing well. So that's why the goal is to show that the excess risk is small for this model. [INAUDIBLE] Eventually you care about the learning algorithm. But to show this, it does depend on what the family of hypotheses is. But the final, final goal is to show that a learning algorithm using this family of models can work. Do you ever actually evaluate l hat? I assume it requires a sort of distribution over l of theta hat. So can you evaluate l? Yeah. Empirically, I guess, yeah. So yes, you can evaluate l pretty well, in the sense that you can have held-out data. That's what validation data is used for. Of course, there are some subtleties about-- OK, so how do you evaluate l, if you want. The ideal scenario is that you collect some new data, fresh data, and then you use the empirical estimator on it. The subtlety would be whether you have seen this data before, right? If you haven't seen this data before, then you are all great. But if you have seen this data, then it becomes tricky.
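The holdout discussion above can be illustrated on the linear parameterized family h theta of x = theta transpose x with the squared loss, where ERM is just least squares. In this sketch (my own toy data; the lecture states this abstractly), the training loss of theta hat is optimistically low precisely because theta hat has "seen" the training data, while a fresh holdout estimate is not:

```python
import numpy as np

# ERM for the linear family h_theta(x) = theta^T x with squared loss,
# i.e., least squares, on made-up data. Compare the loss on the training
# set ("seen" data) with the loss on a large fresh holdout set.
rng = np.random.default_rng(0)

def sample(n, p=10):
    theta_true = np.ones(p)                  # assumed ground-truth parameter
    X = rng.normal(size=(n, p))
    y = X @ theta_true + rng.normal(size=n)  # labels with unit-variance noise
    return X, y

X, y = sample(n=30)
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # argmin of l_hat(theta)

train_loss = np.mean((X @ theta_hat - y) ** 2)
Xh, yh = sample(n=100_000)                   # fresh (holdout) data
holdout_loss = np.mean((Xh @ theta_hat - yh) ** 2)
print(train_loss, holdout_loss)              # train loss < holdout loss
```

With only 30 examples and 10 parameters, the gap is substantial; the training loss even dips below the noise floor of 1, which no predictor can beat on fresh data.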
So that's actually exactly what we are doing here with l hat and l, right? So this is intuitively very much correct, but the question is that-- we will talk more about this. The subtlety is whether we have seen the data before or not. Any other questions? OK, cool. OK, sounds good. And this is the kind of main topic in this course, although there are going to be more and more subtleties about this; for example, in the first few weeks, we're going to talk about this. And then other things in this course-- we're going to talk about, for example, how to minimize l hat of theta, right? So suppose you know that all of this is great, but you still want to know how you do this in a computationally efficient way, right? That's something we're going to touch on for a few lectures. And also we're going to talk about additional complications, in some sense, in deep learning. In some sense, this framework becomes questionable when you do deep learning. Of course, some part of it still survives-- actually most of the parts survive-- but if you really go into the low-level technical stuff, then some of the technical stuff stops making sense, and there are a lot of additional complications, right? So far everything is still kind of OK, but once you go one level lower, then some of the classical techniques don't apply to deep learning. And also we're going to talk a little bit about online learning, which is somewhat different, but still some of these losses are involved, of course. And in the transition, all of this-- the notation still mostly applies, but with a little bit of differences. OK, so that's the formulation. And now let's move on to asymptotics. Before that, any questions? OK, cool. So what does asymptotic analysis mean? So this is a type of analysis where you assume that n goes to infinity. So n, the number of examples, goes to infinity. And you show a bound of this form.
Something like: the excess risk, this is our goal, which is l of theta hat minus the argmin-- sorry, minus the min-- is less than c over n, plus little-o terms. And here this constant c is a constant, but it's not a universal constant. It's a constant that, of course, doesn't depend on n, but could depend on the problem-- for example, the dimension, right? And the little-o, kind of what you learned from calculus, is a lower-order term compared to 1 over n. So this is kind of the general approach. And after we talk about this, we're going to move on to the so-called non-asymptotic approach, which I will discuss after we talk about this. OK, so-- There's a question. [INAUDIBLE] Shall we close the door? I don't know. Could you be a little quieter? Yeah, I think that one probably is fine. Anyway, yeah, that's a good question. So, why do we care about this bound? So we want a bound that goes to 0 as n goes to infinity. Because you want to say that if you have more and more examples, you can do better and better. But whether it's 1 over n, or 1 over square root n, or 1 over n squared, that depends on what's the truth, right? So it just turns out that the right bound is 1 over n, as we'll see. You cannot get better. You shouldn't get worse. Of course, it still depends on the setting a little bit. But for the settings we're going to talk about, 1 over n is indeed the right rate. Yep, so cool. OK, so now let's get into a little more concrete setup. So we are going to write that theta is in capital Theta, a subset of R^p. This is our family of parameters. And theta hat is-- I'm writing this again, just to rewrite it-- the ERM solution. And just for notational convenience, let's define theta star to be the best model in this family, right? But under the population risk, not the empirical risk. Theta star is the best in terms of the population, and our goal is to bound the excess risk, which will be just l of theta hat minus l of theta star.
OK, the excess risk. So our goal is to show that l of theta hat minus l of theta star is small. OK. And a trivial consequence of the definition is that l of theta star is the min over theta of l of theta. Fine. OK, so here's the theorem that we are going to prove. Typically in this course, I'm going to take the approach that we state the theorem first, and then talk about why we have to prove it, or how we prove it. So we assume consistency. By the way, as with what I said in the beginning, this part of the lecture is a little bit informal, just because I don't want to get into too much trouble. Too many [INAUDIBLE]. So what does the consistency of theta hat mean? It means that theta hat eventually converges to theta star, in probability, as n goes to infinity. If you are not familiar with what convergence in probability means, it doesn't really matter that much. So the reason why you have to have something slightly different is because theta hat is a random variable. If it were just some deterministic variable as a function of n, then you could define convergence in the trivial way. But here theta hat is a random variable. So technically this means convergence in probability, just in case you are interested, but it's not that important. So convergence in probability means that if you take the limit, as n goes to infinity, of the probability that theta hat minus theta star is larger than some epsilon, this probability will go to 0, for any epsilon that's larger than 0. But it's not very important for this course. For this course, it's perfectly fine if you just understand this intuitively. Theta hat is a random variable because [INAUDIBLE] depends on the probability distribution-- Yeah. Exactly. Is it assumed that the map from the samples to theta hat is measurable, correct? Sorry, can you say that again? Something like, the map from the samples to theta hat is measurable.
Yeah, we have all of those. Yeah. When you're writing in landscape, the stuff was a bit bigger on the board. Would it be possible to-- Yes, but I think the issue is that it's going to be smaller vertically. So I felt that this is better, I think, because there are more things shown on the board. Could you maybe write a bit bigger? Bigger? Yes, sure. That's fine. Maybe I should also repeat the questions for the Zoom meeting as well, but yeah, next time. OK, cool. And also the formulas-- OK, sounds good. So and then, let's see. So we also assume that the Hessian of the loss at theta star is full rank. And what does the Hessian mean? Probably most of you have seen the Hessian if you have taken CS229, but the Hessian is just the second derivative, organized in a matrix. So the Hessian of a function f is a matrix, and all its entries are the partial second derivatives of f. All right, this is a matrix of dimension p by p, if f is a function that maps R^p to R. OK, and there are also some other regularity conditions which I'm not going to even state, because they're probably not super important for this course. For example, this involves something like the gradient being finite, something like that. And under these assumptions, then you have a few things. You can know a lot of things about theta hat. So the first thing you know is-- formally you have to write it like this. Square root n times theta hat minus theta star, this is bounded; this is O_p of 1. So I'm going to define O_p of 1 in a moment. But this is really, roughly speaking, just saying that theta hat minus theta star is roughly on the order of 1 over square root of n. Something like this. So if you multiply theta hat minus theta star by square root n, then it becomes on the order of a constant. So what is this O_p of 1 here? Again, this is not super important for the course.
If you don't want to think about it carefully, you can just think of O_p of 1 as O of 1, as in most of the standard CS courses. But the detail here is that it means bounded in probability: a random variable Xn, indexed by n, is O_p of 1 if for every epsilon larger than 0, there exists a bound M, such that the sup over n of the probability that Xn is bigger than the bound M is at most epsilon. I guess for sup, you can think of it as max, if you are not familiar with sup. But if you are not familiar with all these jargons, just think of this as O of 1. [INAUDIBLE] minimizer is unique? Yes. Actually, that the minimizer is unique is already assumed when I defined this, in some sense. So again, I'm pretty informal here, but I'm already assuming that the minimizer is unique. But indeed, if the minimizer is unique-- I think for the minimizer to be unique you need the Hessian to be full rank, but the Hessian being full rank doesn't mean the minimizer is unique. OK, sounds good. So any other questions? But the most important thing here is that you somehow know how close theta hat is to theta star. And how far it is, it's something like 1 over square root n, as n goes to infinity. And then also you know how different l of theta hat, the population risk of the minimizer theta hat, is from the population risk of the best model, theta star. And how different are they? They are different in the sense that if you multiply the difference by n, then you get a constant, which pretty much is saying that l of theta hat minus l of theta star is something like 1 over n. OK, and actually more than this. You also know the distribution of theta hat minus theta star. So theta hat minus theta star is a vector, right? And if it's multiplied by square root n, it's going to be on the order of a constant. But you also know what the distribution of this random variable is.
As n goes to infinity, this is converging, in distribution, to a Gaussian distribution, with mean 0 and some covariance. And this covariance is complicated; let me write it down. Something like this. By the way, all of these are in the lecture notes, so you don't necessarily have to take notes if you don't want to. Anyway, how to interpret this covariance-- I think it's not interpretable for the moment. But the point is that it's a Gaussian distribution, after scaling by square root n. If you don't scale by square root n, it's going to be smaller and smaller. But if you scale by square root n, then it's going to be a Gaussian distribution with a fixed covariance. And its mean is 0, so theta hat is centered around theta star, which is very good news. And finally, you also know the distribution of the excess risk. We have talked about the excess risk as a random variable being on the order of 1 over n here, right? So this is what we have talked about. But you also know exactly what the distribution is. So the distribution is-- it's actually a complicated statement, but let me do it. First you define a random variable-- a Gaussian random variable with some covariance. The exact details here also don't matter that much, because they come from the derivation. You derive it, and you find that this is exactly the right thing. So the point here is that if you define this random variable, then you know that the excess risk, which is l of theta hat minus l of theta star, if you multiply that by n, then in distribution, it converges to this random variable involving the norm of this Gaussian random variable S. And you also know the expectation of this, if you really want, which is something divided by 2n. And you also know what's the constant.
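Claims like these can be watched happening on the simplest possible ERM problem, estimating a mean. In this toy instance of my choosing (not from the lecture), l of theta on example x is (theta - x)^2 / 2, so theta hat is the sample mean and theta star is the expectation of x; both sqrt(n) times (theta hat - theta star) and n times the excess risk stay on the order of a constant as n grows:

```python
import numpy as np

# Toy ERM: l(theta, x) = (theta - x)^2 / 2, so theta_hat = sample mean,
# theta* = E[x], and the excess risk is exactly (theta_hat - theta*)^2 / 2.
rng = np.random.default_rng(0)
mu, trials = 3.0, 10_000

for n in (50, 500):
    x = rng.normal(mu, 1.0, size=(trials, n))
    theta_hat = x.mean(axis=1)                       # ERM solution per trial
    scaled_err = np.sqrt(n) * (theta_hat - mu)       # stays ~ N(0, 1)
    scaled_excess = n * 0.5 * (theta_hat - mu) ** 2  # stays O(1), mean ~ 1/2
    print(n, scaled_err.std(), scaled_excess.mean())
```

Both printed quantities are essentially unchanged between n = 50 and n = 500, which is exactly what "O_p of 1 after scaling" means; the mean of the scaled excess risk, about one half, matches a chi-squared over 2.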
OK, so all of these formulas don't necessarily matter that much, because you derive them and you get this, right? But the point is that you almost know everything. So you know everything about theta hat. You know the distribution of theta hat. You know l of theta hat. You know the distribution of l of theta hat. It's very powerful. And you can make all of this formal if you want. Any questions so far? The first assumption [INAUDIBLE] property of [INAUDIBLE]. Is that a property of what? [INAUDIBLE] Yeah, so my understanding is that the question is, what is the consistency assumption-- is that a property of something, right? Is it a property of the problem? Yes, that's correct. So it's a property of the problem, meaning that it's a property of the model parameterization. Yeah. So this might answer the question. I have no idea how we would get this equation from a Gaussian. I'm not following [INAUDIBLE]. Sorry, you are not following why this is true? So what are some other materials that can be-. I guess maybe we can talk about this offline, it's OK. Yeah, just come to me after the course. Yeah. But one thing for everybody: you are not expected to see why these are true, right? These are just some statements saying that, OK, this can be done mathematically. I will show you something about how to derive this, at least somewhat informally. And the proof techniques are actually pretty simple. The calculation is a little bit tricky. It's a little bit complicated; you have to work through it. But the fundamental idea is very simple. So far I'm only stating that these are all correct. You can prove all of this. That's the only thing I'm saying so far. Are these [INAUDIBLE] very strong, or are they easily satisfied by any problem? Yeah, so for example, the consistency assumption, right? Yes, so that's a very good question, right? So far we've seen this very strong statement telling us everything about theta hat, right?
So something probably should go wrong, because otherwise we would have solved all the problems. There's no linearity assumption-- it works for nonlinear models, right? So I think the problem is that the consistency assumption is a little bit tricky if you don't have n going to infinity. You really have to have n be really, really big; then you can somewhat have the consistency. And I think, basically, the whole limitation of this theorem is that you need to let n go to infinity, and you really need a very, very big n to see this effect. So we're going to discuss this a little bit after we move on to the non-asymptotics. But yeah, that's a trailer. Yeah. Right, so when n goes to infinity, you have super powerful tools, in some sense. But still, these are actually reasonable characterizations for many cases. So it's not like they are completely off from reality. I guess they are not necessarily that applicable to the modern practice, just because these days we don't have n going to infinity, right? You have a million data points in your ImageNet, but your parameters are like 10 million. So n is not going to infinity with the parameter count fixed. So that's going to be the next half of the lecture, to some extent. [INAUDIBLE] one or two or-- [INAUDIBLE] Yeah. One and two are consequences of three and four, yeah. And actually, when we really prove it, if we do a very formal proof, you're going to prove three and four first, and then do one and two, yeah. OK, I think I have 15 minutes, right? Yeah, 15 minutes. OK, so what I'm going to do in the next 15 minutes is to show an informal proof of one and two. And next time I'm going to do a little more formal proof of three and four, and then we're going to be done with the asymptotics. And then we'll move on to the more non-asymptotic stuff. So this is actually the proof, right? So the key of the proof is two things.
One of the things is that you're going to do a Taylor expansion around theta star. And the second thing is that you want to somehow use the fact that l hat is close to l, and nabla l hat is close to nabla l. Nabla l hat is the empirical gradient, and nabla l is the population gradient. And this is by the law of large numbers. OK, I'll elaborate on this. But the most important thing is really the Taylor expansion, right? Once you can work in the neighborhood of something, then everything becomes somewhat easy, OK? So now let's talk about how to really do it. So when you do the Taylor expansion, the starting point is the following. So you care about theta hat, right? And what you know about theta hat is that 0 is equal to the gradient of the empirical loss at theta hat. This is because theta hat is the minimizer, right? And at the minimizer, the stationarity condition tells you that the gradient is 0. But you want to relate this to l, because everything is easier when you do it with l, because l is the population. First, relate this to theta star. So you want to relate everything-- basically, the whole idea is that you want to relate theta hat to theta star and l hat to l. So the first thing is that we try to relate this to theta star. So you expand around theta star. Theta star is the reference point. And then the zeroth-order term is the gradient of the empirical loss at theta star, and the first-order term will be the Hessian of the empirical loss at theta star times theta hat minus theta star. So this is the Taylor expansion for a multi-dimensional function, but it's exactly the same as the one-dimensional case; it's just that you have to deal with some matrices. So maybe just a small remark here. What I'm doing here is that I'm expanding something like the gradient of g at z plus epsilon, abstractly speaking. I'm going to do a lot of these abstractions for small things, right?
So suppose you care about this, epsilon is a small thing, and z is your reference point. You can show that the Taylor expansion for this is really something like nabla g(z) plus nabla squared g(z) times epsilon, where nabla squared g(z) is a matrix and epsilon is a vector. And how do you verify this? You can do this for each dimension individually, and you get this equation. This is intuitive as well, right, because the Hessian is the gradient of the gradient. So this is the first-order Taylor expansion, OK? Any questions? OK, so now after I do the Taylor expansion, you know that the left-hand side is 0. And then you can rearrange, right? Put the first-order term on the left-hand side. So what you get is that nabla squared l hat(theta star) times (theta hat minus theta star) is equal to minus nabla l hat(theta star), plus some higher-order terms. And then, now you have theta hat minus theta star: you can take the inverse of the Hessian. So you have: theta hat minus theta star is equal to minus the inverse of the empirical Hessian times the empirical gradient at theta star, plus higher-order terms. OK? [INAUDIBLE] Sorry, that's my bad. It's still l hat so far. Thanks. OK, cool. OK, so that's exactly the right point. So now I need to change all the l hats to l. And what I know, so basically, I want to change this Hessian to the Hessian of l, and I want to change this gradient to nabla l as well. And I also need to consider the differences between them. So how do I do that? So at least I know a few things, right? I know that the expectation of l hat(theta star) is equal to l(theta star). I know the expectation of nabla l hat(theta star) is also equal to nabla l(theta star). So you assume enough regularity conditions so that you can swap the gradient with the expectation, and you have the same thing for the Hessian. And nabla l(theta star) is equal to 0, because theta star is the minimizer of l. So that's why this is 0. And the Hessian is a p by p matrix, which is full rank, as we assumed.
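As a quick numerical sanity check of this first-order Taylor expansion of a gradient, here is a toy g (a hypothetical function, not the course's loss): the error of the expansion should be quadratic in the size of epsilon.

```python
import numpy as np

# Check grad g(z + eps) ~= grad g(z) + (Hessian g)(z) @ eps for a toy
# function g(z) = 0.5*||z||^2 + 0.25*z_0^4 (made up for illustration).
def grad_g(z):
    g = z.copy()
    g[0] += z[0] ** 3
    return g

def hess_g(z):
    H = np.eye(len(z))
    H[0, 0] += 3 * z[0] ** 2
    return H

z = np.array([1.0, -2.0, 0.5])
eps = 1e-3 * np.array([1.0, 2.0, -1.0])

exact = grad_g(z + eps)
first_order = grad_g(z) + hess_g(z) @ eps

# The residual is the higher-order term, O(||eps||^2): tiny here.
print(np.max(np.abs(exact - first_order)))
```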
And basically, this is the average of n IID terms, right? Because the empirical Hessian is 1 over n times the sum of nabla squared l(x_i, y_i, theta). So it's an average of IID terms. Then you can use the law of large numbers to say that the empirical Hessian converges to the population Hessian. And similarly, you also know that nabla l hat converges to nabla l. Moreover, by the central limit theorem, you can also get something more accurate about this convergence. So here we only showed that it converges, but you can also know how different they are. You know that if you look at the difference nabla l hat(theta star) minus nabla l(theta star), scaled by square root of n, this will converge to a Gaussian distribution with mean 0 and covariance equal to the covariance of nabla l(x, y, theta star). I'm using the central limit theorem here, so maybe I should first review the central limit theorem. So suppose x hat is equal to 1 over n times the sum of the x_i, and the x_i are IID from some distribution D, say in d dimensions. Then let sigma be the covariance of x_i. Then you know that as n goes to infinity, x hat converges in probability to the expectation of x_i, all right? That's the law of large numbers. And then the more accurate statement is that you can look at the difference between x hat and the expectation of x. And you know that if you scale the difference by square root of n, then it converges to a Gaussian distribution. First of all, the scaled difference is on the order of a constant. And secondly, the limiting distribution has mean 0 and covariance sigma. And in some sense, informally, this is saying that x hat minus E[x] is on the order of 1 over square root of n. OK, so this is the central limit theorem.
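The scaling in this central limit theorem review is easy to see in simulation. A minimal sketch with a made-up distribution and constants: sqrt(n) times (x hat minus the mean) should behave like a Gaussian with standard deviation sigma.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative CLT check: for x_i i.i.d. with mean mu and std sigma,
# sqrt(n) * (x_bar - mu) is approximately N(0, sigma^2), i.e.
# x_bar - mu is on the order of 1/sqrt(n).
mu, sigma = 2.0, 3.0
n, trials = 10_000, 2_000

samples = rng.normal(mu, sigma, size=(trials, n))
scaled_dev = np.sqrt(n) * (samples.mean(axis=1) - mu)

print(scaled_dev.mean())  # close to 0
print(scaled_dev.std())   # close to sigma
```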
And what we are doing here, in this equation, is basically applying the central limit theorem, where x_i corresponds to the gradient of the loss at example i. Yeah, this is the gradient of the loss at example i. OK, so basically we have done these preparations: we know how different nabla l hat is from nabla l, and we also know that the empirical Hessian converges. And now we can come back to this important equation here, and we are ready to get something real, so let me rewrite it. Theta hat minus theta star is equal to minus the inverse of nabla squared l hat(theta star), times nabla l hat(theta star), plus higher-order terms. So the first factor is close to the inverse of nabla squared l(theta star); that's the first thing we know. And also we know that the second factor is, roughly speaking, nabla l(theta star) plus a fluctuation of order 1 over square root of n, right? So if you combine all of this-- maybe I'll take the question first because this takes a little bit of time. [INAUDIBLE] Can you say that again? Is there a difference between x hat and x when you're using the central limit theorem? x hat and x? Oh, sorry, my bad. Wait, what? Oh, I guess so. Yes, so I was thinking of x as also drawn from D. So maybe I should either use x_i, or say x is a generic variable drawn from the same distribution D. But the expectation of x is the same as the expectation of x_i, that's right. Are we using unbiasedness here? Here? Right. Yes, I'm using that. OK, so maybe I'll just do this a little bit more carefully. So I'm basically trying to replace the l hat with l, right? So the first thing is that the gradient, using the central limit theorem equation, is roughly nabla l(theta star) plus a term of order 1 over square root of n. And nabla l(theta star) is 0, so this is roughly of order 1 over square root of n.
So if you don't care too much about vectors versus real numbers in the distribution, then you get order 1 over square root of n. And this other factor, the inverse Hessian, converges to a constant matrix. So these two things together will give-- maybe I should use the yellow color to continue-- minus something like a constant times 1 over square root of n. So basically you get the inverse of nabla squared l(theta star) times a term of order 1 over square root of n, and this is on the order of 1 over square root of n. So that's how you get that theta hat minus theta star is on the order of 1 over square root of n. Of course, just to clarify, this is not exactly formal, because I'm ignoring a lot of things. For example, this 1-over-square-root-of-n thing is really a vector, not a scalar, but it's on that order. So that's how you get that theta hat minus theta star is on the order of 1 over square root of n. And also, heuristically, if you really care about l(theta hat) minus l(theta star), the excess risk, you can also do a Taylor expansion. You expand around theta star. The first-order term is nabla l(theta star) transpose times (theta hat minus theta star). Sorry, this is maybe-- why do I have so many typos in my notes? Sorry, my bad. So this is theta hat. Here the interesting thing is that if you do a first-order Taylor expansion, you're going to get 0. So you have to do a second-order Taylor expansion. So you're going to get one half times (theta hat minus theta star) transpose times the Hessian times (theta hat minus theta star), plus higher-order terms. OK, so the reason why I need to do the second-order Taylor expansion is that the first-order term is 0, because nabla l(theta star) is 0: theta star is the minimizer of l, right? So that's why we have to look at the second-order term. And if you want to roughly see how large the second-order term is, you can see that each of the (theta hat minus theta star) factors is of order 1 over square root of n.
This is 1 over square root of n. So basically, the second-order term will be something like 1 over n, plus higher-order terms. OK, so this is a heuristic proof showing why theta hat minus theta star is on the order of 1 over square root of n, and, in terms of the loss, the excess risk is on the order of 1 over n. Any questions so far? So the consistency is needed [INAUDIBLE]?? So consistency is needed at almost every step. [INAUDIBLE] I'm using the central limit theorem only on a random variable, not a function of it. Because I'm not sure whether that's-- oh, by the way, I forgot to repeat the question, but anyway, I'll remember that next time. So the question was whether the central limit theorem is applied to the random variable itself. I think so, because the x_i here corresponds to the gradient: the gradient of the loss at example i is my random variable. So that's how I got-- and then the sum of the x_i corresponds to the empirical gradient, right? And the expectation corresponds to the population gradient. [INAUDIBLE] wouldn't you need some [INAUDIBLE]?? Yeah. You need a lot of different regularity conditions to make all of this work, because, for example, there's also implicit stuff that I didn't go through, which is the inverse, right? I only showed that the empirical Hessian converges to the population Hessian. You also need to show that the inverse of the empirical Hessian converges to the inverse of the population Hessian. So that's another thing you'd want to deal with formally. So yeah, I've taught this two or three times, and every time there are a lot of questions about this first lecture. I still haven't figured out a better way to teach it, but I think the thing I really want to convey is this idea: you can do a Taylor expansion, and you can do a lot of heuristic steps, and all of them can be made formal.
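Since the argument above is heuristic, it can help to see the two rates in a simulation. A minimal sketch for squared loss with a Gaussian model, where the ERM is just the sample mean (all constants here are made up): halving the parameter error when n quadruples is the 1/sqrt(n) rate, and quartering the excess risk is the 1/n rate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy check of the asymptotic rates: for squared loss l(x, theta) = (x - theta)^2
# with x ~ N(theta*, 1), the ERM is the sample mean, and the excess risk is
# L(theta_hat) - L(theta*) = (theta_hat - theta*)^2.
theta_star = 1.5

def avg_param_error_and_excess_risk(n, trials=5_000):
    x = rng.normal(theta_star, 1.0, size=(trials, n))
    theta_hat = x.mean(axis=1)  # ERM for squared loss
    param_err = np.abs(theta_hat - theta_star).mean()
    excess = ((theta_hat - theta_star) ** 2).mean()
    return param_err, excess

e1, r1 = avg_param_error_and_excess_risk(100)
e2, r2 = avg_param_error_and_excess_risk(400)
print(e1 / e2)  # ~2:  |theta_hat - theta*| scales like 1/sqrt(n)
print(r1 / r2)  # ~4:  excess risk scales like 1/n
```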
And how to exactly make it formal is a little bit tricky. These are all great questions, and all questions are welcome, but just to set expectations, this is not meant to be a very formal derivation here. OK, so I think that's all for today. So next time we are going to make this a little bit more formal, maybe for 15 minutes, and then we can move on to other things.
Stanford CS229M Machine Learning Theory, Fall 2021. Lecture 8: Refined generalization bounds for neural nets; kernel methods. OK, so let's get started. So I think last time, where we left off, we covered the weaker generalization bound. And today we are going to prove a stronger generalization bound for the neural network. Let me just double-check where I-- sorry. Somehow I got confused about where I left off. OK, cool, cool. Yeah. Ah, yeah. Yeah. So last time what we had was this generalization bound of the form where you have something like a square root of m factor. And today we are going to remove that square root of m, not exactly by just improving the bound: we also have to somewhat change the hypothesis class. So that's the first part of the lecture. So first we have a stronger version, and then we talk about some connections to the kernel method. And then we will talk about an even stronger bound for multi-layer networks. And that requires some preparation with some techniques, and we'll talk about those techniques if we have time today. Otherwise, we'll talk about those next week. OK, so just to briefly review the setup. The setup was that we have some theta, which consists of two layers. The second layer is the vector w, and the first layer is a matrix U that maps dimension d to dimension m. And our model is something like w transpose phi of U x, where phi is the element-wise ReLU. And so, last time what we had was this generalization bound: the Rademacher complexity of H is bounded by something like 2 times Bw times Bu times C times square root of m over square root of n, where H is defined by restricting the 2-norm of w to be at most Bw, and restricting the max over i of the 2-norm of U_i to be at most Bu. And that's the hypothesis class.
And in some sense, I guess we discussed this a little bit in the class, and also I think somebody asked this question: there is a scaling invariance. Because (alpha times w, U over alpha) would be the same model as (w, U), right? Just because you can scale the second layer by alpha and downscale the first layer by 1 over alpha, if alpha is bigger than 0. So that means you can also change this bound a little bit and rewrite it: you can say, roughly speaking, the generalization error is bounded by something like square root of m over square root of n, times the 2-norm of w, times the max over i of the 2-norm of U_i. This is the intuitive way to think about this. So today we're going to have a stronger bound that doesn't have the square root of m here, but we will have slightly different terms in how you measure the complexity of w and the complexity of U. OK? So here is the refined bound. Let me state the theorem first. So the theorem is that we define this complexity measure called C of theta. This complexity measure is defined to be the sum over j of the absolute value of w_j times the 2-norm of u_j. And correspondingly, given this complexity measure, you can define the corresponding hypothesis class, which is the family of functions with complexity bounded by B. And also, we assume that the norm of x_i is at most C for every i. Note that here we actually have a stronger assumption on the data, because before we assumed the average of the squared norms is at most C squared; now we assume each data point has norm at most C. This is just a technicality, in some sense. And with all of this, then we can prove that the Rademacher complexity of H is bounded by 2 times B times C over square root of n. OK.
So maybe let me first start with some interpretation of this theorem and see why this is an interesting one to prove, and then I'll write the proof. So a few remarks. The first one is: why is this better than before, right? So I'm claiming that this is strictly better than before, at least in the following sense. The way that I compare them is the following. So before, what we had was that the generalization bound is something like square root of m over square root of n, times the 2-norm of w, times the max norm of the U_i's, as I already said. That's the intuitive way of thinking about it, if you assume C is a constant. C is just something about the data, which doesn't change as you change the hypothesis class; it's really something like a constant. And now, you can basically think of this new bound as big O of 1 over square root of n, times B. What is the capital B? The capital B is basically the sum over j of the absolute value of w_j times the 2-norm of u_j. So basically, the way I'm comparing them is that I'm comparing these two quantities, and the claim is that the second quantity is never larger than the first quantity. And the reason is just some simple inequalities. So first, by Cauchy-Schwarz, you say this is at most the sum of w_j squared to the power 1/2, times the sum of the squared 2-norms of the u_j's to the power 1/2. And then the first term becomes the 2-norm of w. And in the second term you can bound each summand by the max, so you get m times the max over j of the squared 2-norm of u_j, to the power 1/2. So what you get is square root of m times the 2-norm of w times the max over j of the 2-norm of u_j. So in this sense, this is a strictly better bound. They could be the same if your w_j's and u_j's make all of these inequalities exactly tight, but in other cases they won't be. And in some sense, one of the intuitions here is that this new complexity measure C(theta) captures the scaling invariance better. What do I mean by that?
So what I mean is the following. I mentioned this scaling invariance: (w, U) is equivalent to (alpha times w, U over alpha). This is because the ReLU is positively homogeneous, so you can do this. But actually, you have a lot more scaling invariances: you can scale each pair of neurons this way. So what you really have is that this is equivalent to scaling each w_j by some alpha_j, and correspondingly scaling u_j by 1 over alpha_j. And you do this with a different scalar for every pair (w_j, u_j), and this is still the same, right? Just because the sum over j of w_j times phi(u_j transpose x) is the same as the sum over j of alpha_j w_j times phi((1 over alpha_j) u_j transpose x), for any scaling that is positive, right? And you can see that under these kinds of invariances, this complexity measure stays the same, right? The complexity measure is really invariant to the scaling here, because if you change w_j and u_j accordingly, you don't change the complexity. Which, to some extent, seems to be a good thing to have, right? But the old complexity measure doesn't have this property: if you scale each w_j by a different scalar and scale each u_j accordingly, that number would change. [INAUDIBLE] Sure. [INAUDIBLE] 2-norm [INAUDIBLE]. Right. [INAUDIBLE] Right. So you are saying that this? Yes. So yes. So this one, you do make a stronger assumption. [INAUDIBLE] Sorry? Can you say it again? [INAUDIBLE] By C, yes. [INAUDIBLE] Sorry, what was the question? Maybe I didn't answer. I was thinking you write that norm [INAUDIBLE] as ex or [INAUDIBLE]? So I'm guessing what you are saying is that before, the condition was something like-- [INAUDIBLE] I think it was: the square root of 1 over n times the sum of the squared 2-norms of the x_i is less than C.
That was in the previous theorem, I think. Or something like that; in the previous theorem it was: less than C squared. Right. So indeed, the new condition is stronger than the old one, because this one implies the old one. Correct. So I'm saying: suppose this is not a problem, you just live with the stronger assumption. Then our bound is strictly better. In some sense, this assumption on x is actually a little bit less important, because, for example, if your data satisfies the stronger assumption anyway, then it doesn't matter. But you are right that the data assumption is a little bit different; I don't think it matters that much. So I guess [INAUDIBLE]? Right. That's true. That is definitely true. Or you can choose the right C. But I think the question was more about comparing the two theorems: if you normalize here, maybe you should normalize there, so what's the fair comparison? Cool. So this is one thing about this complexity measure: it is a little bit more invariant, at least to the trivial invariances in the neural network. And also, the bound is better. And another nice thing about this theorem is that as the width m goes to infinity, you get a stronger or equivalent theorem. So the theorem gets stronger. What do I mean by that? Let me explain this. So suppose you look at the dependency on m, right? This whole theorem depends on m implicitly somewhere. I didn't specify that, but now let's make it more explicit. Let's say H_m is this hypothesis class, where you have m neurons, and also C(theta) is at most B. All right. So for every m our theorem applies; I'm just making the dependency on m a little more explicit. And you know that H_m is a subset of H_{m+1}. In what sense?
In the sense that if you have a function in H_m, you can always add a fake neuron, a zero dummy neuron, to make it a member of H_{m+1}. For any f_theta in H_m, you can add a dummy neuron, meaning you set w_{m+1} = 0 and u_{m+1} = 0, and then you can extend the function so that it lies in H_{m+1}. So H_{m+1} is always a bigger family of functions than H_m. But the bound does not depend on m: you have the same Rademacher complexity bound for every m. So in some sense, the theorem is stronger for bigger m. The strongest statement would be to just apply it to H_infinity. And that's, in some sense, the fundamental reason why later you will see a generalization bound that is non-increasing as m goes to infinity. And that's another nice property of this complexity measure. And also, another small remark is that there is something called the path-norm. If you haven't heard of it, it probably doesn't matter. This is a complexity measure that people proposed, and people found empirically that it correlates with the real generalization gap. And it is very closely related to the definition of C(theta) here. So in some sense, the path-norm is trying to say: you look at all the paths from the input to the output, and you look at the total norm over all the paths. And in some sense, this is kind of like that. It's not exactly the same, depending on which version of the path-norm. But the way to think about this is that you look at the input x, and this weight is w_j, and this thing is u_j. So every path matters. That's why you look at w_j times u_j first, and then you take the sum, instead of looking at each layer first and then multiplying. Yeah. If you haven't heard of the path-norm, what I said probably wouldn't make that much sense.
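Both remarks above, the Cauchy-Schwarz comparison and the per-neuron scaling invariance, are easy to sanity-check numerically. A minimal sketch with made-up weights; the decaying neuron norms are an assumption chosen just to make the gap between the two quantities visible:

```python
import numpy as np

rng = np.random.default_rng(0)

def complexity(w, U):
    # Refined complexity measure C(theta) = sum_j |w_j| * ||u_j||_2.
    return np.sum(np.abs(w) * np.linalg.norm(U, axis=1))

def f(w, U, x):
    # Two-layer net f(x) = w^T relu(U x), as in the setup above.
    return w @ np.maximum(U @ x, 0.0)

m, d = 50, 10

# Toy weights with decaying neuron norms (purely illustrative).
w = rng.normal(size=m) / np.arange(1, m + 1)
U = rng.normal(size=(m, d)) / np.sqrt(np.arange(1, m + 1))[:, None]

# 1) C(theta) never exceeds the older quantity sqrt(m)*||w||_2*max_j ||u_j||_2
#    (the Cauchy-Schwarz step), and is much smaller when neuron norms decay.
C_theta = complexity(w, U)
old_quantity = np.sqrt(m) * np.linalg.norm(w) * np.linalg.norm(U, axis=1).max()
print(C_theta <= old_quantity)  # True

# 2) Per-neuron rescaling w_j -> alpha_j*w_j, u_j -> u_j/alpha_j (alpha_j > 0)
#    changes neither the function (ReLU homogeneity) nor C(theta).
alpha = rng.uniform(0.5, 2.0, size=m)
w2, U2 = alpha * w, U / alpha[:, None]
x = rng.normal(size=d)
print(np.isclose(f(w, U, x), f(w2, U2, x)))     # True
print(np.isclose(C_theta, complexity(w2, U2)))  # True
```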
But if you have heard of it, probably you can see the connection there. This is not super important; this is just something people have studied empirically. All right, so we'll talk about more implications of the theorem later. But before that, let me prove it. Any questions so far? So how do we prove this? You can see that one of the main points in the proof is that you want to handle the scaling in the right way, because you want to capture the scaling invariance; you don't want to just peel the layers off naively. Before, what we did was that we tried to remove the w first, and then we removed the U: you have a sup over w and U, and you somehow remove each of them sequentially. And now, the thing is that you still do the same thing, you still remove them sequentially, but you want to rescale things first and then remove them, so that you can eventually get the right scaling invariance. I'm not sure whether this makes sense yet; you will see it more clearly in the proof. So first of all, let's define u_j bar to be the normalized version of u_j: u_j divided by its 2-norm. And then, let's start with the derivation. So what we have is that the Rademacher complexity is something like this. I put my 1 over n in front just to make it easier. This is the definition, and in the first two steps I'm just plugging in the definitions. And now, we want to rescale w and U before we take the sup. So what we do is rewrite each term as w_j times the 2-norm of u_j, times phi of u_j bar transpose x_i. So in some sense, you pull the norm of u_j outside of the phi; the norm of u_j is a positive number, so you can pull it outside of the phi. And sorry, I have a little bit of trouble reading this, but I think I can remember what-- oh, OK. There's a page break so that I couldn't read what my notes were. Anyway, so you rearrange this a little bit. So in some sense, we treat this w_j times the 2-norm of u_j as our old w_j.
And we want to kind of remove that first. And also, you can see that this is something that shows up in the complexity measure: the complexity measure is basically that the sum of these is at most B, right? The complexity measure is really just the sum over j of |w_j| times the 2-norm of u_j, right? So you have a sup over theta. And, I guess, we rewrite this: you change the order of the summation so that it's clearer, so you get the sum over j of this coefficient times the sum over i from 1 to n of sigma_i phi of u_j bar transpose x_i. And here, let's specify what the constraint on theta is. The constraint is that C(theta) is at most B, which means that the sum over j of |w_j| times the 2-norm of u_j is at most B, right? So the constraint is really just saying that this sum is at most B. And now, you can see that the sum of these coefficients is at most B, but we care about a weighted sum: we weight each of the inner quantities by a coefficient, and then we take the sup. Where's the sigma that dropped out of that last one? What is it? Oh, sorry. Yeah. My bad. There's a sigma_i here. Sorry, this is the problem when you draft things on the fly. Just this particular line, I couldn't read it from my notes, so I'm improvising. OK. Thanks. So we know that the sum over j of |w_j| times the 2-norm of u_j is at most B. So that means you can use an inequality here. Maybe let me write it abstractly: the sum over j of a_j b_j is at most the sum of the a_j times the max of the b_j. This is what we apply. I should use j just to be more consistent: j from 1 to m in both sums, a_j times b_j. And a_j corresponds to |w_j| times the 2-norm of u_j, and b_j corresponds to this inner quantity. That's abstractly what I'm doing. So if you plug this in, then you get the sum of the a_j's, which is the sum over j of |w_j| times the 2-norm of u_j, times the max over j. Right.
So in some sense, this is the inequality: the inner product of a and b is at most the 1-norm of a times the infinity-norm of b. And then, this first quantity, the 1-norm, is at most B, right? So then, this is at most 1 over n times the expectation over sigma of B times the sup over theta of the max over j of the sum over i of sigma_i phi of u_j bar transpose x_i. And now, if you carefully compare this with what we had before, this should look somewhat familiar. Because in some sense, we achieved almost the same thing as we have done before: we removed the influence of w, and we only have something about U. And here, what you have about U doesn't have the scale anymore; you only have u_j bar. So basically, now what you can do is say that this max over j is not doing really much. So what you can do is replace it by a max over u bar, where the norm of u bar is 1, of the sum of sigma_i phi of u bar transpose x_i. So that's one thing we can do. [INAUDIBLE] Sure. That's a good point. So I should have an absolute value. I think I should have it here, and I should have it here, and I still should have it here. Thanks for catching all of this. And then, you probably also remember there's a step I skipped before, where I remove the absolute value by paying a factor of 2. So you can do it: this is at most this sup. All of this is exactly the same as before. And now, you can remove the phi by the Lipschitz composition lemma, the Talagrand contraction lemma. So you can get rid of the phi; this is the Talagrand lemma. And then this becomes the Rademacher complexity of the linear model, and you do the same steps.
And then you get the same thing: 2 times B times C over square root of n, where the C comes from the norm of the x_i. So basically, everything from here on is the same as before. I guess there is a small difference, which is that the u bar is now normalized to norm 1; that's why you don't pick up the extra factor. Before, if you look at the earlier proof, what happens is that you have a different control on U: you know the norm of U is at most B_U. And now you know the norm of u bar is at most 1. That's why B_U doesn't show up in the final bound: because the norm of u bar is at most 1. So in some sense, this is almost the same proof. The only difference is that you somehow remove the scaling of U first: you fold the scaling of U into the w so that you can organize this a little bit better. Any questions? OK. Cool. Great. So I think next, let me talk about some of the implications of the theorem here. Some of them are kind of interesting. So I think one thing is that if you believe in this theory, then what you would directly do, and this is not exactly what people do in practice, but I would argue it is also close to what people do in practice, is define the following max-margin solution. You want the max-margin or the minimum-norm solution. So I guess you can maybe do problem one, where you minimize the complexity C(theta), with the constraint that the margin is at least 1. Why do we care about the margin? Recall that all of this depends on the margin eventually, because eventually your generalization error will be the complexity over the margin. Or alternatively, and I think these are exactly equivalent, you can maximize the margin, with the constraint that your complexity is at most 1. So let's call this program two.
And we can define its value to be gamma star; I probably don't have to define it precisely now. So we can do these two programs, right? And the reason why you want these programs is because your generalization bound will be something like: the generalization error L(theta hat) is at most C(theta hat) over gamma_min(theta hat), times 1 over square root of n, plus lower-order terms, right? This is using the general machinery that we had. So you have the 1 over square root of n. This part corresponds to the Rademacher complexity of H, right? And this is the margin. So that's what we got from the margin theory. [INAUDIBLE] Is there any [INAUDIBLE] difference between [INAUDIBLE]?? It just seems like [INAUDIBLE]. I think, depending on-- I think you are basically right. But I would say we have already achieved something. I think maybe the right way to think about this is to compare the two bounds in the very idealized case: if all the w_j's are the same and all the u_j's are the same, then these two bounds are just the same. So then you are right, you are just changing the form of the bound, and nothing really changed, right? You just folded the square root of m somewhere. But the thing is that this Cauchy-Schwarz step is not always tight, and you probably shouldn't expect it to be tight. It shouldn't be the case that all the w_j's and u_j's are the same. You probably should have decaying w_j's: as you have more and more neurons, you're going to have smaller and smaller w_j. There's no way that this is tight for all m, right? It can be tight for one m, but if you had more neurons it wouldn't be tight.
So the typical thing would be that as you have more and more neurons, these neurons should have smaller and smaller norm, because they are capturing more and more of the complex subtleties in your ground truth function. So basically, I'm saying that this inequality wouldn't be tight for, say, the ground truth function. Right. So yeah. But from a very technical point of view, I think you're right that we only did a very small trick to change the form. Yeah. So this [INAUDIBLE] the other problems [INAUDIBLE] that [INAUDIBLE]? Yeah, I think you can say that in some sense, yes. Or at least that the other bound would be-- yeah, I guess it depends on how you think about this. But the way I think about it is really just that these two bounds are exactly the same when all the w_j's and u_j's are the same. They are all, for example, constant, or maybe all 1 over square root of m. So then you don't gain anything from this, right? But it would be very different if you want to find a function where your w_j and u_j go to 0 gradually as you add more and more neurons. OK? So going back to the generalization bound. I think the generalization bound in some sense motivates the use of this kind of max-margin solution or minimum-norm solution, just because eventually your Rademacher complexity depends on the complexity of the model, and you also have the margin term from the margin part, from the last lecture. And one of the interesting things is that this bound, you can show, is not increasing as m goes to infinity. And the reason is actually pretty simple. But maybe let me write it down, just to be clear about what I really mean. So let's use theta hat m to denote the minimizer of, say, program one, where m indexes how many neurons you are using. So for every m you have a minimizer.
And you can define gamma m star to be the corresponding margin-- you can define gamma m star to be the best margin under the constraint that C(theta) is at most 1. So let's mostly use program 2 as our main thing-- there was a little typo here. So suppose you solve this program 2, and you get this maximizing solution. And then your bound is C(theta hat m) over gamma_min(theta hat m), over square root of n. And because we normalized the complexity C to be 1, this is really 1 over gamma m star times square root of n, right? This is the generalization bound. So basically whether this bound is better or not depends on whether gamma m star is increasing or decreasing. And interestingly, gamma m star is increasing. And this is in some sense almost by definition. Why? If you think about what gamma m star means, it is the maximum margin you can achieve when you restrict your complexity to be at most 1, right? And also use m neurons. And the thing is that with more neurons, you can at least achieve the same margin. You shouldn't be worse. With more neurons, you never get worse: you can at least achieve the same margin by adding a dummy neuron, by exactly the same argument as before. You just add a dummy neuron, and it doesn't change the functionality, it doesn't change the complexity, it doesn't change the margin-- everything is the same. But having more neurons gives you additional flexibility. You could possibly change your neurons a little more cleverly instead of just adding a dummy neuron. That's why adding one more neuron will potentially make your margin bigger. So at least, you never make the margin smaller by adding neurons.
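The dummy-neuron argument can be checked mechanically. Here is a minimal numpy sketch (the ReLU activation, data, and sizes are made up for illustration): appending a neuron with top-layer weight 0 leaves the function, and hence the margin, unchanged, and also leaves the complexity measure C(theta) = sum_j |w_j| ||u_j||_2 unchanged.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def f(W, U, X):
    # two-layer net: f(x) = sum_j W[j] * relu(U[j] @ x); rows of X are inputs
    return relu(X @ U.T) @ W

def complexity(W, U):
    # C(theta) = sum_j |w_j| * ||u_j||_2
    return np.sum(np.abs(W) * np.linalg.norm(U, axis=1))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))   # 5 data points in R^3
W = rng.normal(size=4)        # 4 neurons
U = rng.normal(size=(4, 3))

# add a dummy neuron: top weight 0, arbitrary bottom weight
W2 = np.append(W, 0.0)
U2 = np.vstack([U, rng.normal(size=3)])

assert np.allclose(f(W, U, X), f(W2, U2, X))             # same function
assert np.isclose(complexity(W, U), complexity(W2, U2))  # same complexity
```

Since the outputs on every data point are identical, the margin of the padded network is identical too, which is exactly why gamma m star is non-decreasing in m.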
So that means that this bound can decrease as m goes to infinity. At least it's not increasing as m goes to infinity. In some sense, this is the nice thing about this compared to other bounds, where you have an explicit dependency on m. If you have an explicit dependency on m, at least if you just look at it, you wouldn't be able to argue that the bound is better. So now you can say this bound is better as m goes to infinity. Of course, this doesn't address everything, because this is just an upper bound. It's not like you are saying that the actual generalization error is decreasing as m goes to infinity. That would be the ideal theorem to prove, right? That would match exactly the plots I showed last time, where you have more neurons and your accuracy is improving, or your error is decreasing, right? Here we're only talking about bounds. So if the bound is loose, then it's unclear whether this decreasing-in-m thing is really a big deal. And that's indeed true. But I think this is, in some sense, a starting point. If your bound is increasing in m, that is completely useless. If your bound is decreasing in m, that doesn't mean it's super powerful. But at least that's a good sign to have, right? That's a good thing to have. And in some sense, it's really hard to capture the exact test error. If you really want to say that the exact test error, the actual generalization error, is decreasing in m, basically the only thing you can do is work with linear models. At least so far, the only technique I know is that you just literally compute exactly what the test error is. For linear models you can do the analytical derivation using linear algebra to simplify things. And in certain cases, you can show the error is indeed decreasing as m goes to infinity. This is actually a pretty popular direction in the last few years. People have done this for various kinds of linear models.
But basically, this is restricted to linear models, right? Here, we want to work with neural networks, so we have to live with a weaker result: we only say that the bound is decreasing, not that the actual error is decreasing. So I guess the next thing I want to say is that these programs are still different from what you do in practice. You probably don't use exactly this complexity measure. Nobody regularizes like that. Probably somebody tried; it probably wouldn't make a difference. And what I'm going to say here is that it's interesting that this complexity measure is definitely different from the L2 complexity measure, right? But once you minimize this complexity measure, you get the same effect as minimizing the L2. Or minimizing the L2 is the same as minimizing this. Maybe let me just clarify what that means. So basically, my main point here is: maximizing the margin can be done by minimizing the cross-entropy loss with L2 regularization. So here I have two things. One is that I'm using the cross-entropy loss. And the other is that I'm using L2 regularization. I'll address them one at a time. First, I'm going to use L2 regularization instead of the complexity measure I defined, and I'm going to say that it's actually doing the same thing. So here is the first lemma. Consider the program we have considered-- let's call its value J1-- where you minimize the complexity with the constraint that the margin is at least 1. By the way, I keep switching: sometimes I'm minimizing the complexity subject to the margin, and sometimes I'm maximizing the margin subject to the complexity. I should probably make them all consistent, but in my mind they are always the same.
They are just equivalent because-- yeah. So anyway, here I am minimizing the complexity subject to the margin being at least 1. And I'm claiming that if you look at another program, J2, where you minimize the L2 norm with the constraint that the margin is at least 1, then these two are the same. Now obviously, the two objective functions are not the same-- the two complexity measures are not the same. But if you minimize them, the extremal points actually turn out to be the same, which is kind of interesting. And the proof is as follows. One thing you know is: what is the L2 regularizer? It is the sum of the squares of all the parameters, which is the sum of wj squared plus the sum of uj 2-norm squared. And you can show that this is larger than the complexity measure we defined, by applying AM-GM to wj squared plus uj 2-norm squared-- I believe this is called the AM-GM inequality; for me, everything is Cauchy-Schwarz anyway. So you get 2 times |wj| times the uj 2-norm, and the 2 cancels with the 1/2. So this is at least C(theta). So in the program J2, you are minimizing a larger complexity measure. But the intuition is that even though you are minimizing a larger complexity measure, the extremal point actually makes these two things the same. The intuition is that the extremal point should satisfy |wj| equal to the uj 2-norm, even when you are minimizing the L2 regularizer, right? And suppose that's the case. Then you can believe that these two programs are the same. Because when I'm minimizing the L2 regularizer, if the extremal point satisfies this, then C(theta) is the same as the L2 cost. So then, you are not really doing anything different. So that's kind of the intuition.
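The balancing intuition can be checked numerically. In this sketch (numpy, ReLU, random parameters, all made up for illustration), rescaling each neuron so that |w_j| = ||u_j||_2 exploits ReLU's positive homogeneity: the function and C(theta) are unchanged, while the L2 cost 1/2 ||theta||^2 drops to exactly C(theta), illustrating both the AM-GM step and the balanced extremal point.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def f(W, U, X):
    return relu(X @ U.T) @ W

def C(W, U):
    # complexity measure: sum_j |w_j| * ||u_j||_2
    return np.sum(np.abs(W) * np.linalg.norm(U, axis=1))

def half_l2(W, U):
    # L2 cost: (1/2) * ||theta||^2
    return 0.5 * (np.sum(W**2) + np.sum(U**2))

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 3))
W = rng.normal(size=5)
U = rng.normal(size=(5, 3))

# AM-GM: (1/2)||theta||^2 >= C(theta), with equality iff |w_j| = ||u_j||
assert half_l2(W, U) >= C(W, U)

# rebalance each neuron: shrink one layer and grow the other so that
# |w'_j| = ||u'_j||; positive homogeneity of relu keeps f unchanged
norms = np.linalg.norm(U, axis=1)
s = np.sqrt(np.abs(W) / norms)              # per-neuron rescaling factor
W2 = np.sign(W) * np.sqrt(np.abs(W) * norms)
U2 = U * s[:, None]

assert np.allclose(f(W, U, X), f(W2, U2, X))   # same function (and margin)
assert np.isclose(C(W, U), C(W2, U2))          # same complexity measure
assert np.isclose(half_l2(W2, U2), C(W2, U2))  # balanced: L2 cost hits C
```

This is exactly the construction used in the formal proof below: the rebalanced theta prime is feasible for the L2 program and its L2 cost equals C(theta).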
If you really want to prove this formally, I guess the simplest way is the following. The AM-GM step implies that J2 is at least J1. And you want to use the intuition to show that J1 is at least J2 as well. So what we do is: let theta be the minimizer of the first program-- let's call the two programs P1 and P2. So theta is the minimizer of P1. And then you construct a theta prime which is good in terms of the second program. What you do is take wj prime to be a rescaled version of wj, and uj prime to be a rescaled version of uj. And then you can verify that, because I'm just changing the scaling, wj prime times phi of uj prime transpose x is actually the same as wj times phi of uj transpose x, as before. And the complexity measure is also the same after this transformation. So C(theta) is the same as C(theta prime), and f theta is the same as f theta prime. The functionality and the complexity measure didn't change. And what's interesting is that for theta prime, C(theta prime) is also equal to the L2 cost. Why am I doing this construction? Because I wanted |wj prime| to be equal to the norm of uj prime-- that's why I chose this scaling. You can verify that |wj prime| is the same as the uj prime 2-norm; this is by design, and if it weren't true, I would change the scaling to make it true. But that's the point. So what does this mean?
This means that theta prime satisfies the constraint of P2. So that means that 1/2 times the norm of theta prime squared is at least J2, right? And 1/2 times the norm of theta prime squared is equal to C(theta prime), by the balanced construction. And C(theta prime) is equal to C(theta), because I'm just rescaling, which is equal to J1, OK? So that shows J1 is at least J2. And before we got J2 is at least J1. So that's why J2 and J1 are the same. Yeah. Actually I was hesitating whether I should show this proof or a more intuitive version, which is in the lecture notes-- there's a different way to prove the same thing there. At the end of the day, everything is relatively simple; nothing is really hard. This proof is very easy to verify, and the other proof in some sense carries the intuition. And the intuition is really just what I said: at the extremal point, |wj| and the uj 2-norm have to be the same, so these two complexity measures are not different. So that's the main intuition. [INAUDIBLE] Theta prime satisfies the constraint of P2. The constraint is only about the margin, right? And the margin is only about the functionality of the model. If you predict the same thing, your margin will be the same, right? And theta prime and theta have the same functionality, because you only rebalance the scales: you multiply wj by a factor and divide uj by the same factor. So the functionality is maintained, and that's why the margin is the same. In the first part of the proof, why is [INAUDIBLE]?? In the why there is no-- In the first part of the proof, when you pull out the sum? Here?
When you need it? Yeah. So here, this is the equality? No, the line below it. Oh, sorry. This, not that. Why this is an equality and this is an inequality. I got it. OK, cool. Great. So the first lemma we have shown basically says that minimizing the L2 norm is the same as minimizing this complexity measure, OK? And we also wanted to handle the cross-entropy. This is something I am not going to prove, but I'm going to state the lemma; if you're interested, you can read the paper about it. The proof is actually relatively simple, but we probably won't have time today. So lemma 2: consider a regularized cross-entropy loss, L hat lambda of theta, which is equal to 1 over n times the sum of the losses-- I guess this is the first time in this lecture I've talked about the cross-entropy loss, but I assume you somewhat know what it is, right? This is the loss for logistic regression, applied to yi times f theta of xi. So that's the input to the loss. And the loss in some sense is really t maps to log of 1 plus exponential of minus t. This is the logistic loss. And you add lambda times the L2 regularizer. Suppose you do this, and let theta hat lambda be the minimizer. I'm going to claim that for small enough lambda, theta hat lambda is basically doing the same thing as the max margin solution. But there is a small thing that I have to deal with, which is: what is the norm, right? Because for the max margin thing, you need a norm-- you basically need to care about the ratio between the margin and the norm. So my statement is the following: as lambda goes to 0, the ratio between the norm cost and the margin of theta hat lambda will go to J1.
J1, which was defined to be the minimum norm solution, right? I'm just recalling the definition. So basically, you are converging to the max margin solution, or the minimum norm solution, up to a scaling-- because you are looking at a ratio. When you have a very small lambda, the norm of theta will actually be pretty big, because your regularization is too weak, so you are going to get a big-norm solution. But if you normalize the norm by the margin, then you find that this is actually the max margin solution. I'm not going to prove this. If you're interested, this is Theorem 4.2 of a paper I wrote with two collaborators. And actually, this theorem is very simple. And it works not only for L2 regularization-- it works for almost all homogeneous regularizations, almost all regularizations you can think of. So the gist is basically: suppose you care about the max margin solution with respect to a certain complexity measure. The complexity measure could be L2, as in this case, or it could be something else-- here it could be anything, right? One way to achieve it is to just add a very weak regularization to the cross-entropy loss, and that will give you the max margin solution. OK. Any questions? [INAUDIBLE] Yeah. [INAUDIBLE] Yeah. So the general gist is: suppose you care about the max margin solution, right? But the max margin solution requires a complexity measure. You need to say, I'm minimizing such-and-such norm with the margin at least 1, or I'm maximizing the margin with some norm constraint. There's a norm, or a complexity measure. So if you want to get the max margin solution, you just put that complexity measure in here, in the cross-entropy loss, with a small enough lambda. And then the solution will give you the max margin solution.
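Here is a small numeric illustration of the lemma for a linear model (where the same statement holds); the toy data, step size, and lambda schedule are my choices, and this is a sanity check rather than the theorem's proof. On separable data where the max margin is sqrt(2), gradient descent on L2-regularized logistic loss with shrinking lambda produces solutions whose normalized margin approaches sqrt(2).

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# toy separable data, stored as z_i = y_i * x_i; for these points the
# max-margin direction is (1,1)/sqrt(2) with margin sqrt(2) (z1 is binding)
Z = np.array([[1.0, 1.0], [4.0, 1.0]])

def normalized_margin(theta):
    # min_i y_i <theta, x_i> / ||theta||
    return np.min(Z @ theta) / np.linalg.norm(theta)

margins = []
for lam in [1.0, 1e-3, 1e-6]:
    theta = np.zeros(2)
    for _ in range(30000):
        # gradient of (1/n) sum_i log(1+exp(-<theta, z_i>)) + lam*||theta||^2
        grad = -(sigmoid(-Z @ theta) @ Z) / len(Z) + 2 * lam * theta
        theta -= 0.2 * grad
    margins.append(normalized_margin(theta))

# smaller lambda -> normalized margin moves toward the max margin sqrt(2)
assert margins[-1] > margins[0]
assert margins[-1] > np.sqrt(2) - 0.05
```

With lambda = 1 the minimizer leans toward the data mean and has a visibly smaller normalized margin; as lambda shrinks, the solution's norm grows but its direction approaches the max-margin direction, matching the "normalize the norm by the margin" statement.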
Of course, you can find the max margin solution directly by solving the program. But you can also do it this way, and this is something that is more typical-- at least, this is what people do empirically all the time, in some sense. So in some sense, this is just linking what people do empirically with the max margin solution, which is not itself something people typically solve directly in deep learning. But if you care about the broader interpretation, the caveat here is that you need lambda to be very small. Basically, this says that if you use a very small lambda, you get the max margin solution. But empirically, you don't use that small a lambda. You actually use something bigger than this infinitesimally small lambda. So empirically, you probably wouldn't get exactly the max margin solution. You're going to get something similar to it, but not exactly the same. And it's kind of interesting-- I guess in CS 229 you learned the max margin solution, so it sounds like before deep learning that was the right thing to do, right? But even for linear models-- at least I haven't seen it; I'm not a practitioner, I do a lot of theory-- when I do experiments, I've never seen that the max margin solution is the best for a linear model. When you use a very small lambda, you do get the max margin solution; but if you use a bigger lambda, sometimes it's a little better. So I think the max margin solution is in some sense just a theoretical approximation of what people really do in practice. All right. So let me see. Next I'm trying to connect this deep learning thing-- this not-very-deep, two-layer network thing-- with the so-called L1 SVM. The exact statement is in my paper as well, but it's only three paragraphs in the appendix. And we are not really inventing it.
We just, in some sense, wrote down something that people already knew implicitly; we thought it was useful to write it down. So the general claim is the following. We want to claim that what the neural network is doing-- this two-layer network with the max margin solution-- is really just something like an L1 SVM in some feature space. But let me explain; I haven't defined what the L1 SVM is. You're probably familiar with the SVM-- that's the so-called L2 version. Here you are going to have a slightly different version of the SVM. So the idea is, first of all, let's look at an infinite number of neurons, because we have claimed that more neurons is always better-- so why not think about infinitely many neurons and see what they do for us, right? So you look at the max margin with an infinite number of neurons; this is the largest possible margin you can achieve even with infinitely many neurons. And suppose this is achieved by u1, u2, and so on-- you may need infinitely many neurons. Actually, you can achieve this without an infinite number of neurons: you can achieve it with, I think, n plus 1 neurons, where n is the number of data points. But let's say you have infinitely many neurons; infinite is basically not very different from n plus 1 here, because as long as you have more than n plus 1 neurons, you don't really get anything more. And again, u bar is the normalization of u. We have played with this a lot of times: the sum of wj phi of uj transpose x is equivalent to the sum of wj times the uj 2-norm, times phi of uj bar transpose x, and so on, right? Let's call this parameterization theta tilde, and call these coefficients wj tilde. We have done this rescaling many times, and we know that rescaling like this doesn't change the complexity measure.
And then here, the complexity measure-- the sum of |wj| times the uj 2-norm-- is just the sum of |wj tilde|. So this is the 1-norm of w tilde. That's where the 1-norm comes into play. So basically, the idea is that after you change to this viewpoint, you just view the wj tilde's as the variables, and then you are doing some kind of sparse linear regression, or sparse SVM. Formally, what you can do is pretend that every u on the sphere-- S d minus 1, the unit sphere in d dimensions-- shows up in the collection of u bars. Why is this possible? This is just because adding more neurons is never a bad thing: you can always add a neuron with any direction u and coefficient 0. If you don't see some direction in the collection, you just add that neuron with 0 as its coefficient. It doesn't change the functionality and it doesn't change the complexity measure. So that's why you can pretend that the collection of u1 bar, u2 bar, and so on-- you have infinitely many of these-- is really just the collection of all possible unit-norm vectors on the sphere. That doesn't change anything. And once you have that-- once you pretend the collection of uj bars is just equal to S d minus 1-- then you can take a continuous perspective. You can say that f theta tilde of x, if you write the discrete version, is a sum; but you can think of it as a continuous version, where for every u bar you have a coefficient w of u bar, and you are integrating over all the u bars. I'm not sure whether this makes sense to everyone. This is the simplest way I came up with to explain this without too much jargon.
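The rescaling to unit-norm directions can be sketched in a few lines of numpy (random parameters, made up for illustration): pushing each ||u_j||_2 into the top-layer coefficient leaves the function unchanged and turns the complexity measure C(theta) into the 1-norm of w tilde.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def f(W, U, X):
    return relu(X @ U.T) @ W

rng = np.random.default_rng(2)
X = rng.normal(size=(5, 3))
W = rng.normal(size=4)
U = rng.normal(size=(4, 3))

norms = np.linalg.norm(U, axis=1)
U_bar = U / norms[:, None]    # unit-norm directions on the sphere
W_tilde = W * norms           # absorb the norms into the top layer

# same function, by positive homogeneity of relu
assert np.allclose(f(W, U, X), f(W_tilde, U_bar, X))
# C(theta) = sum_j |w_j| ||u_j||_2 becomes the 1-norm of w tilde
assert np.isclose(np.sum(np.abs(W) * norms), np.linalg.norm(W_tilde, 1))
```

Viewing the W_tilde entries as the only free variables, with the directions fixed, is exactly the change of viewpoint that leads to the L1 formulation.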
But of course, I don't know whether this works for everyone. Again, in the lecture notes there is a slightly different way to introduce this, which requires a bit more jargon. Any questions? [INAUDIBLE] Sorry, my bad-- why did I write sigma here? Phi. Yeah. All right, sorry. OK. Yeah, feel free. So [INAUDIBLE] number of neurons [INAUDIBLE]?? Right, OK, yeah. Can I have uncountable-- So the question is whether we can have an uncountable number of neurons. This is really just a concept. You could ask the same question about the integral: when we define the integral, you use a countable number of discretizations and take the limit, and you can still get something over an uncountable set. So this is kind of the same thing. And in some sense, eventually this is just language-- it's not like you implement the integral in practice. Does that make some sense? OK. So basically, the way you can view this is that this is the inner product of w tilde with phi, where phi acts as a universal feature map. You think of each of these as a feature-- this is a feature, and this is the coefficient in front of the feature. And the difference here is that this feature map is predefined. It's no longer something learned, because you have all the possible u bars in the world in your feature set. So basically, phi is really just this gigantic feature vector, where you have all the possible u bars in your feature set, and w tilde is the coefficient vector in front of the features. So you can view this as: phi is the feature map, as in the kernel setting, and w tilde is the weight vector, the parameters in front of the features.
So now it's a linear function in the features. But the thing is that the complexity measure corresponds, as we argued, to the w tilde 1-norm, not the 2-norm. This is why the max margin program with C(theta) at most 1 corresponds to a max margin program with an L1 norm constraint. So the corresponding program is: maximize over w tilde the minimum over i of yi times the inner product of w tilde with phi of xi, with the constraint that the 1-norm of w tilde is at most 1. And this is called the L1 SVM with feature map phi. The difference from the SVM you learned in, for example, CS 229 is that this is a 1-norm, not a 2-norm. So it's not just doing a simple kernel SVM; it's doing something different. And the interesting thing is that the L1 SVM is actually not implementable with an infinite number of features. When you take CS 229, one of the messages is that when you use the kernel trick, you can work even with infinite dimensional features, because everything depends only on the kernel-- the inner products of the features-- so you don't really care about the dimensionality of the features. But here, you don't have that kernel trick anymore. If you have the L1 constraint, the kernel trick doesn't apply: the final solution is not just a function of the inner products of the features. So you cannot apply the kernel trick, and that's why you cannot implement it that way. So this part is purely for understanding. It's saying that, OK, the neural network is doing something more than what you can do with a kernel. Because now you are effectively solving an L1 version of the kernel problem, which is something you are not able to do with the standard kernel trick. You have to use the neural network to achieve the same thing.
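A finite approximation of this L1 SVM can be solved as a linear program. In this sketch, a fixed grid of unit directions on the circle stands in for the sphere S^{d-1}, and scipy's linprog maximizes the margin subject to ||w||_1 <= 1 by writing w = p - q with p, q >= 0. The grid size, toy data, and the use of linprog are my choices for illustration, not something from the lecture.

```python
import numpy as np
from scipy.optimize import linprog

# a fixed grid of unit directions on the circle stands in for "all of S^{d-1}"
angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
U_bar = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (64, 2)

X = np.array([[1.0, 1.0], [2.0, 1.0], [-1.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

Phi = np.maximum(X @ U_bar.T, 0.0)    # relu features, one per direction
K = U_bar.shape[0]

# LP: max gamma  s.t.  y_i <w, phi(x_i)> >= gamma,  ||w||_1 <= 1
# variables are (p, q, gamma) with w = p - q, p >= 0, q >= 0
c = np.zeros(2 * K + 1)
c[-1] = -1.0                                      # maximize gamma
A_margin = np.hstack([-y[:, None] * Phi, y[:, None] * Phi,
                      np.ones((len(y), 1))])      # gamma - y_i <w, phi_i> <= 0
A_l1 = np.hstack([np.ones(2 * K), [0.0]])[None]   # sum(p) + sum(q) <= 1
res = linprog(c, A_ub=np.vstack([A_margin, A_l1]),
              b_ub=np.concatenate([np.zeros(len(y)), [1.0]]),
              bounds=[(0, None)] * (2 * K) + [(None, None)],
              method="highs")

w = res.x[:K] - res.x[K:2 * K]
gamma = res.x[-1]
assert res.status == 0 and gamma > 0.0            # separable: positive margin
assert np.abs(w).sum() <= 1.0 + 1e-6              # L1 constraint holds
assert np.min(y * (Phi @ w)) >= gamma - 1e-6      # every point meets the margin
```

With directions fixed on a grid, the margin constraints are linear in (p, q, gamma), so the whole problem is an LP; the catch the lecture points out is that this only works after discretizing the sphere, since with infinitely many features there is no kernel trick to fall back on.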
So did we prove that the L1 SVM is not implementable, or is that just sort of a conjecture? Yeah, we didn't prove that the L1 SVM is not implementable. But-- how do I say this? How would you prove that it's not implementable? You would have to say what you mean by implementation. Maybe the easiest way to say it is just: we don't know how to implement it, but it sounds very unlikely to be doable. On the flip side, for neural networks we are saying that you can implement it. Basically, you can effectively use a neural network to implement this L1 SVM. But the caveat is that you still don't know whether you can optimize the network. So it's not an end-to-end result. It's saying that if you assume you can optimize your neural network efficiently, up to a global minimum, then you can solve the L1 SVM. But there is a caveat about whether you can really computationally solve the neural network problem-- that's something we don't know how to prove theoretically. Empirically it seems to be true; you can do it just by gradient descent. OK. So I think this is all I wanted to say about the two-layer network. Next, our goal will be to prove something about multi-layer networks, and we need more tools. So my plan is to spend the next 10 minutes talking about some of the tools, and we'll continue with the tools in the next lecture. Then we can talk about how to get better bounds for multi-layer networks. But if there are any questions, I can answer them first. It's a little bit awkward-- I thought I had 20 minutes, but there are only 10 minutes. But still, I think it's OK. We can start with the simple thing. It will be a quite different mindset, at least for the moment: we are thinking about the tools again. OK.
So now we are getting back to how to bound the Rademacher complexity, and we are talking about a different type of tool. I guess before doing that, let's think about a function-space view of the Rademacher complexity. Maybe let me write down the Rademacher complexity first. If you have a function class F, this is the empirical Rademacher complexity on the dataset S equal to Z1 up to Zn. And let's define the following set Q. This is a set of vectors, and the vectors are the outputs of the functions on these n points. So for every function, you're going to have an n-dimensional vector. This is basically the set of outputs of functions in F on the data points Z1 up to Zn, right? These are all the possible output vectors you can get by applying functions in F to this set of points. And then you can rewrite the Rademacher complexity as follows: you look at all the possible vectors v in Q, and you look at the inner product of sigma with v, scaled by 1 over n. Just because 1 over n times the inner product of sigma and v is really just 1 over n times the sum of sigma i vi, which is 1 over n times the sum of sigma i f(Zi). This is just a rewriting. So the point here is that this R_S(F) only depends on Q. It depends on the outputs, but not on, for example, the parameterization of F. Let me explain what I mean here. Suppose you have a function class F where f of x is equal to something like the sum of theta i xi, where theta is in dimension d. And suppose you have another function class, F prime, of the following form-- say the sum of theta i plus wi, times xi, where theta is in dimension d and w is also in dimension d. This is just a weird example to demonstrate a point. So suppose you have these two function classes. They have different parameterizations-- they even have different parameter spaces, right?
So one has a d-dimensional parameter space, and the other has a 2d-dimensional parameter space. But these two function classes have the same corresponding Q, because the families of outputs are the same. In some sense, you can match a function in capital F with a function in capital F prime: every possible output vector that can be produced by a function in F can also be produced by some function in F prime. So they have different parameterizations, but they have the same functionality-- the same family of functions-- and they have the same Q. And that means they have the same Rademacher complexity. So I'm just trying to reinforce this idea that the only thing that matters is the outputs of the functions, not how the functions are represented or parameterized. And this will be useful as a general thing; it's kind of a change of mindset. Before, you were talking about the parameters, right? What are the parameters of F? How do you discretize the parameters? From now on, we are not going to think about the parameters that much. We are going to think more about the outputs of the functions. And there's the so-called Massart's lemma, which is actually one of the things you are asked to prove in the homework. The lemma needs the condition that, first of all, for every vector v in Q, the 2-norm of v over square root of n is at most M. So the set Q contains only bounded vectors in this sense. By the way, from now on we're going to see this kind of normalization very often, because you want to measure a vector by its normalized norm-- the norm divided by the square root of its dimension-- since the raw norm itself doesn't matter that much. [INAUDIBLE] Right. So this is like the range from theta [INAUDIBLE]?? Right. That's right.
But I think this is actually a very good question, which I probably should have talked about earlier-- I think I mentioned it a little bit at some point. One of the nice things about the empirical Rademacher complexity is that now you are in the mindset that your Zi's are fixed. You don't have any randomness in the Zi's. They are just n points, fixed there forever. Of course, the function varies as you range over the family of functions, but the Zi's don't change. That simplifies things a lot. So in some sense, you can think of the family of functions as functions that map the Zi's to real numbers, not functions that map R^d to real numbers. You forget about any other points: there are just these n points, and every function can be represented as n numbers, which are its outputs on these n points. There's no other point you have to care about. That's kind of the beauty of the Rademacher complexity, and that's why it's powerful. Before, the Zi's were the source of randomness; now the randomness comes from the sigma's. That's why you can fix the Zi's. So is this going to be a statement about if you have-- as long as you have more than [INAUDIBLE]?? As long as-- You have [INAUDIBLE]? Because then you can [INAUDIBLE]?? I think the exact statement is not what you said, but you are in the right direction. So basically, the Rademacher complexity depends on how complex this set Q is. That's what I'm going to say. And next time you will see that-- actually, I think we have mentioned this before-- if Q is not very complex, for example if Q is a finite set, then you have a good Rademacher complexity. Of course, how you measure the complexity of Q is a question that we have to study. But, for example, if Q is finite, then you have a bound on the Rademacher complexity.
That's what I'm going to write. So suppose two things. One is that Q is finite, and the other is that Q is bounded, in the sense that for every q in Q, (1/n) sum_i q_i^2 <= M^2. Then the expectation over sigma of sup_{q in Q} (1/n) sum_i sigma_i q_i, which is exactly the Rademacher complexity of Q, is bounded by sqrt(2 M^2 log|Q| / n). So the size of Q comes into play. As a corollary (something I presented before with a different proof): if F satisfies that the functions are bounded on the Z_i's in the sense that the average squared output (1/n) sum_i f(Z_i)^2 is bounded by M^2, then the empirical Rademacher complexity of F is bounded by sqrt(2 M^2 log|F| / n). OK. So that's the relatively easy case with a finite hypothesis class, and it is a homework question. There is a hint, which is actually pretty important: consider using the moment generating function, which will make the math easier. There are two ways to prove it; the other way is quantization plus union bound, and you will have a relatively hard time with that, just because the constants are hard to manage. You can work out a similar bound, but it gets messy. The moment generating function approach is really clean; the proof is actually pretty short if you use it in the right way. OK. Let me briefly give a quick overview of what we're going to do next, so you can appreciate why I'm setting things up this way. The next question is: what if Q is not finite? What do we do? Our answer will be discretization plus union bound. Basically you have some epsilon-covering, and you take a union bound over the cover. Or, put differently, I use discretization to reduce to the finite case. That's basically the idea.
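This finite-class bound is easy to sanity-check numerically. Below is a small sketch (not from the lecture; the set Q, its size, and all constants are made up for illustration) that builds a random finite set Q of output vectors, estimates the empirical Rademacher complexity by Monte Carlo over sigma draws, and compares it with the sqrt(2 M^2 log|Q| / n) bound:

```python
import numpy as np

rng = np.random.default_rng(0)
n, num_q = 50, 20                               # n data points, |Q| = 20 output vectors
Q = rng.uniform(-1.0, 1.0, size=(num_q, n))     # each row: one function's outputs on the n points

# Empirical Rademacher complexity: E_sigma [ sup_{q in Q} (1/n) sum_i sigma_i q_i ]
num_trials = 20000
sigmas = rng.choice([-1.0, 1.0], size=(num_trials, n))
sup_vals = (sigmas @ Q.T / n).max(axis=1)       # sup over Q for each sigma draw
rad_hat = sup_vals.mean()

# Massart-style bound: sqrt(2 M^2 log|Q| / n), with M^2 = max_q (1/n) ||q||_2^2
M2 = (Q**2).mean(axis=1).max()
bound = np.sqrt(2 * M2 * np.log(num_q) / n)
print(rad_hat, bound)
assert 0 < rad_hat <= bound
```

With these sizes the Monte Carlo estimate lands well below the bound, as the lemma predicts; the bound is not tight for a random Q, but the log|Q| dependence is the point.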
And you have probably seen this idea before, maybe in the third lecture, when we talked about infinite hypothesis classes. But here there is a difference: here you are discretizing the output space, the set Q, which is a set of n-dimensional vectors. Before, you were discretizing the parameter space; you had a d-dimensional parameter space and you discretized that. Here you are doing a more fundamental discretization, because how the parameters are arranged is probably not the most important thing. What really matters is the functionality of this family of functions. So now you are discretizing in the right, more fundamental space: the space of outputs. What we will do is discuss a few techniques for discretizing this Q, what kind of discretization you really need, and so on. There is also a pretty deep theorem, called Dudley's chaining theorem, which requires you to discretize in a nested way; you build a hierarchical discretization so that you get the best possible discretization. This goes beyond what we did before, even setting aside the distinction between output space and parameter space: you can discretize in a much more efficient fashion. That's what we're going to do next, and then we're going to apply it to the multi-layer network. Sounds good. I think that's all for today.
Stanford CS229M Machine Learning Theory, Fall 2021. Lecture 17: Implicit regularization effect of the noise.

OK, cool. Let's get started. Today we're going to talk about the implicit regularization effect of noise. The plan: this is a pretty challenging topic, and the research community is, in some sense, still doing research on it. We have some results, but they are pretty complicated. So I'm going to take a relatively heuristic approach: I'll try to convey the main ideas without the fully rigorous statements. In this lecture I don't think I will even state a formal theorem, because it would be too complicated and unnecessary; if I really proved the formal version, that would probably take two or three lectures. Instead I'll try to convey the main intuition for why the noise is useful, still with some math, because without the math you sometimes don't even see the intuition. But the math will not always be rigorous, and I will point out where it is not. Some parts cannot be made rigorous without additional assumptions, and I will clarify that too. Some gaps are just for convenience: I skip details that can be fixed by more careful math. Other gaps are fundamental challenges, where you really need additional assumptions, or maybe even to change the problem setting, to go through those steps rigorously. The main portion of the lecture is not about any particular loss function; it is about a generic loss function. We will make some simplifications, but you don't even need to think about the parameterization for most of this lecture. So the setup is that we have a loss function; let's call it g(theta).
And I'm also going to use x as the variable in certain places. By "noise" I really mean the noise in SGD. The stochastic gradient descent algorithm we'll analyze is: theta_{t+1} = theta_t - eta (grad g(theta_t) + xi_t). So we take the full gradient plus some stochastic noise, where the expectation of xi_t is 0: this is a mean-zero noise. But in the most general case the distribution of xi_t can depend on theta_t; that is, the noise distribution depends on which point you are evaluating at. At this level of generality, the formulation does capture stochastic gradient descent as you usually know it, mini-batch SGD: if you take a mini-batch gradient with a few samples, it can indeed be written as the full gradient plus a mean-zero stochastic term. But we are not going to analyze that particular version, because then the noise becomes too complicated; we'll analyze much simpler noise in most cases, something like Gaussian noise. So strictly speaking this is more about noisy gradient descent than about mini-batch stochastic gradient descent, but they do share a lot of similarities. OK? And what we're trying to do is gradually build up our intuition about how this noise affects the optimization algorithm. We'll go through several levels of warmup. The first warmup: what if you have a quadratic loss function? A quadratic loss pretty much means you have a linear model under the hood, but here I don't even have a model parameterization; I only have a loss function g(theta). So say we have a quadratic loss function and Gaussian noise.
And also, we are in 1D: theta is one-dimensional. From now on I'm going to use x as the variable, just to be more consistent with the optimization literature. Let g(x) = (1/2) x^2; the 1/2 doesn't really matter, it just makes the gradient cleaner. So what's the update rule in this case? You are optimizing a quadratic function whose global minimum is at 0, but you are using gradient descent with noise: x_{t+1} = x_t - eta (g'(x_t) + sigma xi_t). The noise has scale sigma, and xi_t itself has mean 0 and standard deviation 1, so the noise term has standard deviation sigma and a Gaussian distribution. Now compute the gradient: the gradient of (1/2) x^2 is just x. So x_{t+1} = x_t - eta (x_t + sigma xi_t) = (1 - eta) x_t - eta sigma xi_t. What's happening here is that the first part is a contraction: if you have x_t, it shrinks x_t toward the minimum by a factor of (1 - eta). The second part is the stochastic term, which may make x bigger or smaller, depending on whether you are lucky or not. So the interesting thing is that when x_t is large, the contraction dominates. The contraction, the shrinking, is doing most of the work. For example, suppose your x_t is here; then you first contract it by multiplying by (1 - eta), and then you add some stochastic noise, so maybe you end up somewhere near the [INAUDIBLE], right?
But still, largely speaking, you are moving towards 0 because of the contraction, just because the shrinking does most of the work. So the contraction dominates. However, when x_t is small, or for simplicity when x_t is 0, the noise dominates the process. If you start very close to 0, the shrinking doesn't change much, because (1 - eta) times a small number is still a small number, and the noise moves you somewhere to the left or to the right. So the noise becomes the dominating part when x_t is small. And eventually you basically converge to this second regime: when x_t is large, you move towards 0, so eventually x_t becomes small and the noise governs the process. Eventually you are just bouncing around at a certain level. You cannot bounce around at very high values, because there the contraction is too strong; you wouldn't be able to stay at that level for long. So eventually you bounce around at a certain level, depending on the noise level. It's a bit like dropping a ball into a valley without friction; it's not exactly the same, because there you don't have additive noise, but you still see this bouncing around, because you can overshoot a little. Maybe that's not exactly the right analogy, but anyway: eventually you bounce around the valley at a certain level.
And how do we make this precise? By the way, this may sound like it has nothing to do with implicit regularization, because whatever you do, you always stay close to a global minimum; there aren't even two global minima here. But the intuition is very useful for what comes later, when we move away from this setting, so this is indeed important. Let's try to be more precise, and this is actually a case where we can be. We can solve the recurrence. We have x_{t+1} = (1 - eta) x_t - eta sigma xi_t. Plug in the same expression for x_t: x_{t+1} = (1 - eta)[(1 - eta) x_{t-1} - eta sigma xi_{t-1}] - eta sigma xi_t, and rearranging, x_{t+1} = (1 - eta)^2 x_{t-1} - (1 - eta) eta sigma xi_{t-1} - eta sigma xi_t. Do this for one more level and you get (1 - eta)^3 x_{t-2} - (1 - eta)^2 eta sigma xi_{t-2} - (1 - eta) eta sigma xi_{t-1} - eta sigma xi_t. And if you keep going, eventually what you get is x_{t+1} = (1 - eta)^{t+1} x_0 - eta sigma sum_{k=0}^{t} (1 - eta)^k xi_{t-k}. So it is a linear combination of the xi's, where the coefficient in front of each xi_k is some power of (1 - eta). From this formula you can see several structurally interesting things, which give you some intuition. One is that the first term is a very strong contraction: that's the contraction part. The initial value is multiplied by many factors of (1 - eta), so it becomes negligible once eta times t is much bigger than 1, because (1 - eta)^t is roughly e^{-eta t}, and when eta t >> 1 this is super small.
And you can view the other term as the accumulation of the noise. The noises are not just adding up; they are accumulated in a particular way. Maybe it is easiest to see like this: the noise you added at the last step is scaled by eta sigma, but the noise added at the second-to-last step carries an additional factor of (1 - eta). Where does that (1 - eta) come from? From the contraction in the last step: xi_{t-1} is what you added at the second-to-last step, and because you do another gradient descent step on top of that, the noise gets contracted a little bit. The same thing happens further back: the factor (1 - eta)^2 comes from the contraction in the last two steps. So basically, every time you add noise at an intermediate step, that noise eventually dies out if you run for long enough, just because there is always a contraction applied after the noisy step. That's why the coefficients multiplying the noises form a geometric series: depending on when you added the noise, the coefficient in front of it becomes smaller and smaller, so you forget the very long history. The noise you added at the very first step doesn't really matter: when k is close to t, the term (1 - eta)^k xi_{t-k} is the noise from the remote past, and it is multiplied by (1 - eta)^k, so because of the contraction it becomes less and less important. So that's one thing.
The accumulation of the noise prefers the recent history and forgets the long-term history. Another thing is that this is a sum of Gaussians: each term is Gaussian under our assumption, because xi is Gaussian and a constant times a Gaussian is still Gaussian. So you can compute the variance. The variance is eta^2 sigma^2 times the sum of the variances of the individual terms, that is, eta^2 sigma^2 sum_k (1 - eta)^{2k}. And the point is that if you take t to infinity, you know the limiting variance. As t goes to infinity, Var(x_t) is roughly eta^2 sigma^2 sum_{k=0}^{infinity} (1 - eta)^{2k} = eta^2 sigma^2 / (1 - (1 - eta)^2) = eta^2 sigma^2 / (2 eta - eta^2), by the geometric series. The eta^2 term in the denominator can be dropped because it is lower order, so this is approximately on the order of eta sigma^2. OK? In other words, as t goes to infinity, x_t has a Gaussian distribution with mean 0 and variance on the order of eta sigma^2. So far we haven't really talked about implicit bias yet, but we already have some intuition about what's happening in the convex case. A small eta means your iterate bounces around less: a smaller stochasticity in the final iterate, because the variance of x_t is smaller. A small noise sigma implies the same thing. So basically, what happens here is that the noise only makes it harder to converge to a global minimum.
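The stationary variance eta^2 sigma^2 / (2 eta - eta^2) is easy to verify by simulation. Here is a minimal sketch, assuming Gaussian xi_t and the exact update x_{t+1} = (1 - eta) x_t - eta sigma xi_t from above (eta, sigma, and the run length are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
eta, sigma = 0.1, 1.0
T, burn = 200_000, 1_000          # long run, discard the transient

x = 5.0                           # start far from the minimum of g(x) = x^2 / 2
xs = np.empty(T)
for t in range(T):
    x = (1 - eta) * x - eta * sigma * rng.standard_normal()
    xs[t] = x

emp_var = xs[burn:].var()
theory = eta**2 * sigma**2 / (2 * eta - eta**2)   # = eta sigma^2 / (2 - eta)
print(emp_var, theory)
assert abs(emp_var - theory) < 0.01
assert abs(xs[burn:].mean()) < 0.05               # no systematic bias: mean stays near 0
```

The second assertion previews the remark below: the stochasticity adds fluctuation around the global minimum but no bias.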
So in some sense, if you only care about the quality of the final solution you converge to, the noise is always harmful, especially if you are willing to take t to infinity. Here you can see that as t goes to infinity, you never converge exactly to the global minimum; you always have some variance around it, and you want that variance as small as possible, because you want to be as close to the global minimum as possible. The noise is only a hurdle rather than [INAUDIBLE] anything. This is why, in the classical convex optimization view, when you think about noise there are typically only two considerations. A: noisy gradient descent leads to less accurate solutions; that's the first thing, which is what we just discussed. And B: noisy gradients are faster to compute. Why does the noise come into play? Because maybe you only sample a few examples, that is, you do mini-batch gradient descent on the empirical risk, so the noisy gradient is faster to compute. And you are trading off these two factors. That's, I would say, the typical way of thinking about stochastic gradient descent in the convex case: noise is bad because it hurts your final accuracy, but you allow some noise in certain regimes because you can compute faster, and by trading off in the right way you get the fastest algorithm overall. And you can imagine how to do this trade-off: at the beginning of the optimization, you don't care that much about accuracy, about converging exactly to the global minimum; you want to get close to the global minimum as fast as possible. So at the beginning you don't care much about the noise, and you use a large learning rate.
And then once you are already close, your goal changes, because now you really want to literally go to the global minimum, period. So then you cannot allow much noise anymore, and that's why you decay the learning rate; that's why there is always this kind of learning-rate decay in these algorithms. So, so far, so good. Also, a side remark, which will be a useful comparison for us later: suppose you fix eta and sigma. Then the expectation of x_t always converges to 0 as t goes to infinity. So even though there is stochasticity, a bouncing around, your average is always 0. This is saying that the stochasticity introduces no bias; it only introduces some fluctuation. Of course fluctuation is also bad, but at least you did not introduce any systematic bias toward any particular direction. That's a remark we will compare against in a bit. And another small remark: this process actually has a name; it's called the Ornstein-Uhlenbeck process. If you are familiar with this process from some other context, you can see this is doing the same thing. We are going to call it the OU process for simplicity. It will be a basic building block for analyzing SGD in more complex cases. OK. So we have understood the one-dimensional quadratic. Now let's do the multidimensional quadratic, which is not really much different, but I need it for the future steps. So suppose you have a multidimensional quadratic: g(x) = (1/2) x^T A x, where A is a d-by-d matrix, x is the variable in dimension d, and A is PSD. And now for the noise xi_t, let's not assume it is just a standard Gaussian.
Let's assume it has covariance Sigma. And then, suppose we care about the process where you do gradient descent with this stochasticity xi_t: the gradient is A x_t, so x_{t+1} = x_t - eta (A x_t + xi_t). Rearranging, x_{t+1} = (I - eta A) x_t - eta xi_t. And you can do the same recursion as before, replacing x_t by its expression in terms of x_{t-1}, and doing this recursively. Eventually you get x_{t+1} = (I - eta A)^{t+1} x_0 - eta sum_{k=0}^{t} (I - eta A)^k xi_{t-k}. And you can see this is still the same kind of intuition. The first term is the contraction; of course it is now a matrix, and we are multiplying by a matrix whose eigenvalues are less than 1, so we are contracting in the matrix sense. The second term is how the noise accumulates, and the noise from the very far history becomes less important: take k close to t, for example; then (I - eta A)^k xi_{t-k} is noise from the remote history, multiplied by many contraction factors applied after the noise was added. This is a more complicated formula, but you can still do essentially the same calculation if A and Sigma are simultaneously diagonalizable. If they are not simultaneously diagonalizable, you can still do something to simplify the sum, but it gets even more complicated. So let's only think about the case where A and Sigma are simultaneously diagonalizable. Then, in some sense, you can view this as d separate OU processes in the eigenspace: a one-dimensional OU process in each eigen-coordinate.
Because when you use the eigen coordinate system, A and Sigma are both diagonal matrices, and then you are basically updating as in the one-dimensional case. More formally, suppose A = U D U^T, where D is the diagonal matrix of eigenvalues d_i of A, and suppose Sigma = U diag(sigma_i^2) U^T. Then, as t goes to infinity, x_t roughly comes from a Gaussian with mean 0, because the contraction part vanishes, and its covariance looks like eta^2 sum_{k=0}^{infinity} (I - eta A)^k Sigma (I - eta A)^k. This is just computing the covariance of each term of the sum: the covariance of a linear transformation W xi of a Gaussian with covariance Sigma is E[W xi xi^T W^T] = W Sigma W^T. Here W = (I - eta A)^k, and A is symmetric, so A and A^T are the same; then you take the sum over k. And you can simplify this when you have the eigendecomposition: (I - eta A)^k = U diag((1 - eta d_i)^k) U^T, and Sigma = U diag(sigma_i^2) U^T, so in the product the inner U's and U^T's cancel.
And then this is the beauty of the eigendecomposition, because everything becomes diagonal: the covariance is eta^2 sum_{k=0}^{infinity} U diag(sigma_i^2 (1 - eta d_i)^{2k}) U^T. Summing the geometric series over k, this becomes eta^2 U diag(sigma_i^2 / (1 - (1 - eta d_i)^2)) U^T = eta^2 U diag(sigma_i^2 / (2 eta d_i - eta^2 d_i^2)) U^T, which for small eta is approximately eta U diag(sigma_i^2 / (2 d_i)) U^T. So you can see that the fluctuation level in the i-th eigenvector direction is on the order of eta sigma_i^2 / d_i. And let me be precise about terminology: this is the iterate stochasticity, the fluctuation of the iterate, because we are computing the fluctuation of the iterate. The fluctuation level in an eigenvector direction depends on the noise level sigma_i^2 in that direction, and it also depends on how strong the contraction d_i is. If the contraction is strong, you get a smaller iterate stochasticity, because such a strong contraction doesn't let much noise build up. And if the noise is big, of course, you eventually get a larger fluctuation of the iterate. Another small remark that is useful: this covariance matrix, U diag(sigma_i^2 / d_i) U^T up to constants, is always in the span of Sigma. So if Sigma is low rank, meaning in some directions there is no noise, then in those directions x_t doesn't have any fluctuation either. That will be useful for us in the future. And another thing: if you think about the rough size of x_t, the norm of x_t, it is on the order of sqrt(eta), because the quantity sigma_i^2 / d_i does not depend on eta.
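Here is a small simulation sketch of the diagonal case (taking U = I, so the eigen-directions are just the coordinates; the eigenvalues and noise scales are made up). It checks the per-direction stationary variance eta sigma_i^2 / (d_i (2 - eta d_i)) and the remark that a direction with no noise has no fluctuation:

```python
import numpy as np

rng = np.random.default_rng(2)
eta = 0.05
d = np.array([2.0, 0.5])        # eigenvalues of A (A diagonal here, so U = I)
sig = np.array([1.0, 0.0])      # noise std per direction: Sigma is rank 1, no noise in direction 2

T, burn = 400_000, 2_000
x = np.array([3.0, 3.0])
xs = np.empty((T, 2))
for t in range(T):
    # x_{t+1} = (I - eta A) x_t - eta xi_t, coordinate-wise since everything is diagonal
    x = (1 - eta * d) * x - eta * sig * rng.standard_normal(2)
    xs[t] = x

emp_var = xs[burn:].var(axis=0)
theory = eta * sig**2 / (d * (2 - eta * d))   # per-eigendirection stationary variance
print(emp_var, theory)
assert abs(emp_var[0] - theory[0]) < 0.005
assert emp_var[1] < 1e-12                     # no noise in that direction, so no fluctuation
```

Note that the noiseless coordinate just decays deterministically to 0, which is the "span of Sigma" remark in action.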
So if you look at the eta-dependence, the norm of the stochasticity, the fluctuation in the iterate, is on the order of sqrt(eta). And this is something good to remember for the moment; it will be useful for us in the future as well. Any questions so far? [INAUDIBLE] should it be also summed over i? Right, right. Yeah. But all of this also depends on the dimension, for example; it depends on how large the sigma_i's are and how large the d_i's are. But in terms of the dependency on eta, this is on the order of sqrt(eta); that's what I mean. [INAUDIBLE] Yeah, yeah. Sure, sure. I'm only talking about the dependence on eta so far. That's like the standard deviation of x_t, essentially? Sure. Yeah. [INAUDIBLE] square root eta. Yes. Well, the size of x also takes into account the contraction term. So is this for large t, so that the contraction term is sufficiently small? Yes, I'm talking about the case where t goes to infinity. Maybe one way to think about this, since I sense what your question is: this is the fluctuation in the iterate when t is essentially infinity, which is different from the noise you add at each step. Again, that's actually a very good question. If you look at the noise you add at each single step, it is on the order of eta (ignoring the dependencies other than eta). So each step you add noise of order eta; eventually all of this noise builds up, it gets added together, and it adds up to something of order sqrt(eta). That's how the noise accumulates. It doesn't accumulate to infinity, just because of the contraction, which shrinks the old noise to some extent; but the noise still builds up to half an order higher in eta, from eta per step to sqrt(eta) over time.
So, yeah, order eta per step. OK. So we have a pretty good understanding of what's happening. Basically, eventually the iterate is bouncing around with radius something like sqrt(eta) in this quadratic, and you don't bounce around in directions where no noise is added. So that's the [INAUDIBLE]. And now let's look at... Is there a way to map this notion of noise back onto mini-batch or stochastic gradient descent in a natural way, or is that not the [INAUDIBLE]? So you want to connect back to the setting where we have mini-batch gradient descent? For the convex case, that's not difficult. Basically, what you say is: what is Sigma? In our definition, Sigma is the covariance of the noise in the gradient. You can compute the covariance of the noise when you use mini-batch gradients. That covariance is something that may change over time, but I think you can pretty much say that when you are close to the global minimum, the change in the covariance of the mini-batch gradient is negligible; it's a higher-order term you can basically ignore. So if you want to map this back to mini-batch gradient descent, this Sigma maps to the covariance of the mini-batch gradient at the global minimum theta*. Then you can rephrase everything in those terms. But I don't think you get anything super interpretable anyway, so that's why I didn't get into it. [INAUDIBLE] it just seems like, if the global minimum is very flat in some dimension, the variance would have a very large effect. Yes, exactly. Exactly, exactly. That's exactly correct. So suppose you have two dimensions; I think this is actually a very good question.
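To make the mapping to mini-batch SGD concrete, here is a hedged sketch (a toy linear regression with squared loss, entirely made up, not anything from the lecture). It estimates the covariance of the mini-batch gradient at the empirical minimizer and checks that it is roughly Sigma_1 / b, where Sigma_1 is the single-example gradient covariance and b the batch size:

```python
import numpy as np

rng = np.random.default_rng(3)
n, dim = 5_000, 3
X = rng.standard_normal((n, dim))
theta_star = rng.standard_normal(dim)
y = X @ theta_star + 0.5 * rng.standard_normal(n)

theta = np.linalg.lstsq(X, y, rcond=None)[0]   # empirical risk minimizer

# Per-example gradients of the squared loss (1/2)(x^T theta - y)^2 at theta-hat;
# their mean is 0 at the minimizer, so their covariance is the gradient noise.
G = (X @ theta - y)[:, None] * X               # shape (n, dim)
Sigma1 = np.cov(G.T)                           # single-example gradient covariance

def minibatch_grad_cov(b, trials=50_000):
    idx = rng.integers(0, n, size=(trials, b))
    grads = G[idx].mean(axis=1)                # mini-batch gradients, shape (trials, dim)
    return np.cov(grads.T)

for b in (1, 10):
    emp = minibatch_grad_cov(b)
    # covariance of a size-b mini-batch gradient is roughly Sigma_1 / b
    assert np.allclose(emp, Sigma1 / b, atol=0.1 * np.abs(Sigma1).max())
```

So in the lecture's notation, Sigma for batch size b would be the b-fold average, scaling the iterate fluctuation accordingly.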
So suppose one direction is sharp, like this, and another direction is flat, like this. The question is, how does the noise affect these two cases? And there is also the question of how you evaluate the impact of the noise, what metric you are thinking about. So far, I have been thinking about how the noise changes the fluctuation of the iterate. Suppose I add the same amount of noise, one unit of noise, in both cases. It is indeed true that stochastic gradient descent itself fluctuates more in the flat case; it probably looks like some stochastic wandering with a larger radius of bouncing around, while in the sharp case you have a smaller radius and you stay closer to the valley. However, even if you have a larger radius in the flat case, it does not necessarily mean a larger effect on the function value, because where you fluctuate a lot, the function is flat as well. So it is OK to fluctuate more in some cases. Let's see whether we can compute this. The variance of the fluctuation in direction i is on the order of eta sigma_i^2 / d_i; that is the squared radius of the fluctuation. And what do you multiply by? You multiply by d_i, because d_i is the curvature of your objective function, and the function value grows like the curvature times x squared. So the effect on the function value is on the order of eta sigma_i^2, which does not depend on the curvature, at least not for the quadratic. Yeah. Right. Does that make sense? OK, cool. All right. So now let's talk about non-quadratic functions. This is where things become interesting, but it is interesting only on top of what we have discussed; that's why we needed the warmup.
So, nonquadratic--and so far, you can still think of this as a convex function, even a one-dimensional convex function. I'm going to change that a little bit. And again, for simplicity, let's assume, without loss of generality, that the global minimizer of this g(x) is just 0, right? So we still have 0 as the global minimizer, and we are still doing something around 0. I'm using matrix notation here--the reason is so that I don't have to do everything twice, once for the scalar case and once for the matrix case. But for simplicity, in your mind you can pretty much interpret all of these as scalars. OK. So I'm also saying that, because 0 is the global min, the gradient at 0 is 0, right? That's a necessary condition. And also, the Hessian nabla squared g(0) is PSD. OK. And let's also assume--this is the part that's not super rigorous, but it can be made rigorous; I just wouldn't have time to do all the rigorous stuff--suppose the iterates are close to 0, so we start from somewhere close to 0. Then you can do a Taylor expansion around 0. So what you do is: xt plus 1 is equal to xt minus eta times the gradient at xt, plus the noise ksi t. And you Taylor expand to approximate the gradient at xt. How do you do the Taylor expansion? If you expand at 0, what do you get? You get nabla g(0), plus nabla squared g(0) times (xt minus 0), plus the third derivative nabla cubed g(0) applied to xt, xt--up to constant factors like 1/2 that I'm absorbing into the notation--plus higher-order terms, which we are going to ignore heuristically, and then the noise term [INAUDIBLE].
If you're not familiar with the matrix notation, this is really just saying that g prime of xt is roughly g prime of 0, plus g double prime of 0 times xt, plus the third derivative of g at 0 times xt squared--wait, what am I doing here--so there is no xt in the first term, there's xt in the second, and xt squared in the third, plus higher-order terms, something like this, right? But I want to use notation where, in the matrix version, this is a matrix-vector product, and this is a tensor-vector product. Let me explain that a little bit. In the multidimensional case, the third derivative is a third-order tensor of dimension d by d by d. Suppose you have a third-order tensor T. Then I'm using this notation T x y, where x and y are both vectors, defined to be a vector--the multiplication of this tensor with two vectors. First of all, it's a vector. And second, the definition is that the i-th coordinate of this is the sum over j, k of T_ijk x_j y_k. So basically, you sum over the remaining indices j and k, you leave the i alone, and that's the outcome. So this is basically the Taylor expansion in multiple dimensions. OK. By the way, just a reminder for the scribe note takers: for these small things I write on the side, please also take notes, because they are useful for readers as well. If someone doesn't have time to attend the lectures, they read the lecture notes, and these small explanations are useful. You can just have a small remark in the left margin. All right. So we have done the Taylor expansion. And we're expecting something somewhat similar to what we had before, right? And indeed you will see that, because, A, this term is going to be 0, because this is the gradient at 0, and that is 0. So basically, what you get is xt minus eta times--OK, let me define, for simplicity, H to be the Hessian at 0.
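The tensor-vector-vector product just defined can be written in one line with `numpy.einsum`. Here is a small sketch (function names are my own) checking it against the coordinate-wise definition:

```python
import numpy as np

def t_vec_vec(T3, x, y):
    # (T3[x, y])_i = sum_{j,k} T3[i, j, k] * x[j] * y[k]
    return np.einsum('ijk,j,k->i', T3, x, y)

rng = np.random.default_rng(0)
d = 3
T3 = rng.standard_normal((d, d, d))
x, y = rng.standard_normal(d), rng.standard_normal(d)

# brute-force version straight from the definition
brute = np.array([sum(T3[i, j, k] * x[j] * y[k]
                      for j in range(d) for k in range(d)) for i in range(d)])
assert np.allclose(t_vec_vec(T3, x, y), brute)
```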
Then you can rewrite this as xt minus eta H xt, minus eta ksi t, and minus the third term. Let me also define T to be the third-order derivative, so this third term is eta T xt xt. And the higher-order terms--let's ignore those from now on. Before, we had an exact formula; here we just have an approximation. So this is (I minus eta H) xt, minus eta ksi t, minus eta T xt xt. And I think you can see--what I was hoping for you to see is that the third-order term is something new, but the first and second terms are not new. The noise term and the contraction term are exactly what we had before, right? For the quadratic case, you have contraction and you have noise: the contraction is linear, and you have the noise. Now the only difference is the additional term from the third-order derivative. And that's expected, because if you ignore the third-order term, it becomes just quadratic. That's why we wanted to expand out to the third order: we want to really use the fact that this is not a quadratic function. So basically, you can think of this as two processes going on. One process is this OU-like process--the basic one for the quadratic. And you have an additional term that makes it a little bit more complicated. Right. And how do we proceed here? So there is one thing--this is a heuristic derivation. In certain cases, it's tempting to just drop the third-order term, because maybe it's small. Let's try to do that. Just drop the third-order term, all right? Suppose you drop it. Then you have this process, where x is updated by something like this. This is a process we have already analyzed.
And we know that, at convergence, xt will be something on the order of square root eta. Here I'm ignoring all the dependencies except the dependency on eta. Now, look back at what happens with the third-order term when xt is on this order. So eta T xt xt--what is this? This is on the order of eta squared, because each xt contributes square root eta, so xt xt contributes eta, and there's another eta in front. So we have an eta squared term, which sounds very small. Why is this very small? Well, eta squared is much, much smaller than, for example, eta ksi t, which is on the order of eta. But that comparison is probably unfair, because ksi t is doing random stuff. More importantly, eta squared is also much, much smaller than even just eta H xt, which is on the order of eta to the 1.5. So the changes of your process--the two other changes of your process--are these two terms, right? And comparing with the noise term is a little bit unfair, because that term is doing random stuff; maybe you shouldn't compare with its absolute value, because eventually there will be some cancellation. But at least you can compare it with the other, deterministic term eta H xt, and this eta squared term is still much smaller than that deterministic term. So in some sense, it's very tempting to say that this third-order term eta T xt xt is very small, and the conclusion would be that it's negligible. And indeed, it is true--it is negligible--under one condition: when H, the Hessian, is strictly positive definite. That's when you have contraction in all different directions. However, when H is not strictly positive definite--so, for example, in some direction--in other words, if you think about it, this eta H xt term is only on the order of eta to the 1.5 where H is not 0, right?
So if H is 0 in some direction, then this eta H xt term is just literally 0 in that direction, and the eta squared term is winning, right? Basically, in a direction where H is 0, eta H xt is 0, and eta squared becomes the largest update. [INAUDIBLE] Eta ksi t is always the largest if you look at the absolute value, right? Eta ksi t is on the order of eta, which is always the largest. But I'm trying to argue that comparing with eta ksi t is a little bit misleading, in the sense that eta ksi t is doing random stuff: in one step it goes in the positive direction, in the next step the negative direction. So basically, what happens is that if you have a stochastic, mean-zero term such that one step is on the order of eta, then eventually it builds up to something like square root eta. That's what we discussed in the quadratic case, right? If every step has a stochastic term of size eta, eventually it builds up to square root eta. However, when you have a deterministic term--if one step is of size eta, then eventually it builds up to something like eta times t, because the steps don't cancel. Maybe this is not the best way to explain it, and this is a heuristic--formalizing all of this requires a little more work. But what I'm saying is: locally, the largest single update is of course eta ksi t.
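Here is a quick numerical illustration of that accumulation argument (a sketch with made-up constants): in a zero-curvature direction there is no contraction, so per-step updates simply add. Mean-zero noise steps of size eta build up like eta sqrt(N), while deterministic steps of size eta squared build up like eta^2 N, which wins once N is much larger than 1/eta^2.

```python
import numpy as np

rng = np.random.default_rng(0)
eta, N = 1e-3, 10_000_000      # horizon N chosen larger than 1 / eta**2

# random steps of size ~eta: they cancel, building up only like eta * sqrt(N)
noise_total = eta * rng.standard_normal(N).sum()
# deterministic steps of size eta**2: they never cancel, building up like eta**2 * N
drift_total = (eta ** 2) * N

print(abs(noise_total), drift_total)
```

With these constants the deterministic total is 10, while the noise total is typically around eta times sqrt(N), roughly 3: the much smaller per-step term dominates in the long run.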
But that term has cancellation over time, because in the future you're going to move in different directions. That's why it's good to also compare with the deterministic changes, which is eta H xt. When you compare with that, typically the deterministic change is bigger than the eta squared term from the third-order derivative. But when H is 0 in some direction, that's no longer true. So sometimes you can prove that when H is strictly positive--nonzero--the third-order term is negligible. Otherwise it becomes trickier: if H has a completely flat direction, it becomes tricky. So I think here is a good point to pause--maybe let's just continue with this. When H is strictly positive, the third-order term will introduce some bias, but a very small one--small in the sense that, as eta goes to 0, it becomes negligible. And I have some figures here. So I have this figure--let's see whether you can see it. It's a little bit small; maybe this way. The function is a one-dimensional convex function. I'm in the case where H is strictly bigger than 0--because it's one-dimensional, this is a strictly convex function. So this is the function, but it's not a quadratic. I think it's quadratic on both sides, but with different curvatures: the left-hand side is flatter, and the right-hand side is sharper. And if you do stochastic gradient descent--I guess the only important thing is this--this is after you take 100 to 1,000 steps of stochastic gradient descent. You can see the iterate bouncing around. This is the distribution of the iterate xt when t is 1024. So 1024 is pretty big--consider it infinity, right? And you can see that it's bouncing around 0.
0 is the global minimum, but the mean is no longer 0 anymore, because of the third-order derivative. The mean is somewhere to the left of 0. In some sense, you prefer the left-hand side a little bit more than the right-hand side, because it's easier to stay on the left-hand side. The left-hand side is flatter, so it's easier to stay there, because the contraction is weaker. The right-hand side is sharper: you add some noise, it contracts, and you go back toward 0 more quickly. So that's why the bias is toward the left-hand side, where you have the flatter curvature. But the bias is relatively small--you could even say it's negligible, because if you take a random point, you're going to get something between maybe minus 0.05 and 0.05, and the bias is only a very small number. Your fluctuation is bigger than the bias. That's why in the classical optimization setting people didn't pay too much attention to this. There are some papers--there is this 2017 paper by Bach and coauthors, "Bridging the gap," about constant step size SGD. Any more questions? This paper characterized this effect for the convex case. And you can see from the title that it's talking about constant step size. Why do you have to talk about constant step size? Because if you decay the step size, then this bias effect will be even smaller--negligible, just completely gone eventually. So to make this effect matter at all, you have to keep the noise from going to 0. That's why in the convex case people typically don't care about this much. In some other cases, you care about it a little bit. This figure is from one of my recent papers with some students at Stanford. Hi, guys.
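The left-bias in that figure is easy to reproduce with a toy piecewise quadratic (my own choice of curvatures, not the lecture's): curvature 1 for x < 0 and 9 for x > 0. The mean of the SGD iterates comes out slightly negative, and much smaller than the fluctuation itself.

```python
import numpy as np

rng = np.random.default_rng(0)
eta, T = 0.01, 200_000
noise = rng.standard_normal(T)

x, xs = 0.0, np.empty(T)
for t in range(T):
    grad = x if x < 0 else 9.0 * x      # flatter left side, sharper right side
    x -= eta * (grad + noise[t])
    xs[t] = x
xs = xs[T // 4:]                         # drop burn-in

print(xs.mean(), xs.std())               # mean is slightly negative; |mean| << std
```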
And here the reason we talked about this is because you have multiple machines, and for some other reasons you have to care about it. But typically you wouldn't care that much, just because the bias is small. OK. So now let's move on to--finally--the implicit regularization part, the more complex case. I'm writing too fast--too cursive, I guess. With stronger implicit regularization. These are cases where both H and sigma are not full rank: your Hessian and your noise covariance are both not full-dimensional. And this is not something to be super surprised by--this is the part that comes from overparameterization. I think it's easier to think about the Hessian: if you have a manifold of global minima, then along the directions of the manifold your Hessian will be 0. So if you have a lot of different global minima, your Hessian will be flat--it will be 0 in certain directions. For simplicity, I won't discuss exactly when this can happen, because you need some calculations and so on and so forth. But suppose H and sigma both live in a subspace K, and the subspace K is low-dimensional, or at least not full-dimensional. And if the loss is quadratic--for the moment, let's still think about the loss as quadratic--recall our earlier conclusion: the iterate will bounce around with covariance something like U diag(eta sigma_i squared over d_i) U transpose, up to constants. So the picture, I think, is that there is no noise and no contraction--nothing--in the perpendicular space of K. In some sense, the function looks like this. Suppose you have some direction in K--this is the direction of K, and this is the direction of K perp. And suppose your function is quadratic in the direction of K, something like this. I'm not sure whether you can see it--I think my drawing is too bad. So imagine a valley.
I'm drawing a valley like this. But this valley is completely oblivious to the K perp direction. This line is the middle of the valley--the valley floor. So basically, what happens is that if you start somewhere here, everything happens in the direction of K, and nothing happens in the direction of K perp. You're bouncing around in the direction of K--maybe you go here, here, do some bouncing like this--but you never move in the direction of K perp. In K perp, you just don't move at all. So that's a little bit of implicit bias, or implicit regularization, in place already--because the implicit regularization comes from the initialization. If you start at this point, then you're going to stay in this slice; if you start here, then you're going to bounce around here, right? And this is exactly what happens when you have an overparameterized linear model: you never leave a certain subspace, and in the orthogonal subspace you never move. So this is not really about noise--the noise doesn't do much here. It's really just that you cannot leave a certain subspace. However, when your loss is not quadratic, then the third-order term is going to matter. This is the main thing that I want to cover today, but unfortunately it's complicated, so I probably won't be able to do everything rigorously--I just really can't do everything rigorously. So what happens is that if the loss is not quadratic, then--recall what happens: you have xt plus 1 equal to (1 minus eta H) xt, minus eta ksi t, minus eta T xt xt, plus higher-order terms. And this is what's happening.
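This "never leave the subspace" picture can be checked directly. A minimal sketch (my own toy valley): a 2-D quadratic that depends only on the first coordinate, with noise only in that coordinate. The second coordinate never changes from its initialization.

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.diag([1.0, 0.0])            # curvature only in the K direction (first coordinate)
eta, T = 0.1, 1_000

x = np.array([1.0, 0.3])           # initialization; 0.3 is the K-perp coordinate
for _ in range(T):
    noise = np.array([rng.standard_normal(), 0.0])   # noise also lives in K
    x = x - eta * (H @ x + noise)

print(x)   # first coordinate bounces near 0; second coordinate is still exactly 0.3
```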
The first part is working in K--because I assume H acts in K, and the noise is always in the subspace K. So this left part always works in K: you are bouncing around in K. And the third-order part can act in K perp. And that makes them completely separate, so nothing controls the third-order term: the third-order term can build up for a very, very long time. So maybe this is the one--let me see. I'll probably come back to this figure multiple times. Right. So this is what's happening here. I don't think I can draw anything here, so maybe first watch it, and then I'll go to a static figure that I can annotate. This is stochastic gradient descent in this valley, and you can see that it's moving along the valley. Now let's look at the static figure--I have one somewhere. In our mathematical language--let me use a different color--this direction is K perp, and this direction is K. OK? But here this is not a quadratic, because the K perp direction does matter to some extent: you can see that if you go from here to here, you go to a flatter and flatter region, right? So what happens is that most of your movement is in the K direction--you are just bouncing around in the K direction. But there is a certain term that drives you in the K perp direction, and that can build up over a long time. You start from here, you do a lot of bouncing around, but eventually, after you bounce for a long time, you move in the K perp direction. And this is because the third-order term accumulates for a long time, until you get to the flatter region. So the main term is doing the bouncing, and the third-order term is accumulating in the direction of the valley. Any questions so far?
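Here is a toy cartoon of this mechanism (my own construction, not the lecture's demo): a valley g(x, y) = exp(-y) x^2 / 2, whose floor is x = 0 for every y and which gets flatter as y grows. Noise is injected only in the x (cross-section) direction, yet gradient descent drifts steadily toward larger y, because the y-gradient -exp(-y) x^2 / 2 is never positive--motion along the valley driven purely by the fluctuation in x.

```python
import numpy as np

rng = np.random.default_rng(0)
eta, T = 0.05, 20_000
noise = rng.standard_normal(T)

x, y = 0.0, 0.0
ys = np.empty(T)
for t in range(T):
    a = np.exp(-y)                      # cross-section curvature, flatter as y grows
    gx = a * x                          # d/dx of g = a(y) * x**2 / 2
    gy = -0.5 * a * x * x               # d/dy of g: always <= 0
    x -= eta * (gx + noise[t])          # noise only in the valley cross-section
    y -= eta * gy                       # y moves only through the deterministic drift
    ys[t] = y

print(ys[-1])   # y has drifted well into the flatter region, with no noise ever added to y
```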
I'll go back to this bigger picture once I do a little more math. If we know that this is happening, could we just move in that direction directly first [INAUDIBLE]--is that a feasible thing to do? Yeah, that's a good question. So if you know this is what's happening, why not do something more explicit to make it faster, right? It's a good question, but it's not something super new--people have thought about it, and I have thought about it. There are multiple constraints we have to respect. I still think this is a feasible direction to go, but it's not easy, and I don't think there's an existing paper that achieves this very well. One thing is: how do you get to the valley in the first place? You want to compute the direction of K perp and go there, right? Getting to the valley is not too hard, but not trivial either: to get to the valley floor, you have to either decay your learning rate or make your batch size bigger, so that you have smaller noise. But that requires more compute, because you want to be more accurate--sometimes you want to be more accurate in the K direction, and that requires more compute. So that's one small thing: whether you can really afford the compute to get to the valley in the first place. In most cases you probably can, but it's not for free, so you do have to consider the cost. And then you get to the valley, and you move in the K perp direction. But the problem is that the real picture is not just one single valley--this is only a local view. Once you go here, locally it sounds great: I'm going to a better place. But maybe this function actually has a lot of other parts.
So actually, you may have to travel really far away, somewhere else. Then you have to take this local view again, and do it again, and so on and so forth. Then you have to find a new valley and find the direction of K perp again. And finding the direction of K perp is also not free, because it requires computing a third-order derivative. Computing a third-order derivative on one example is still OK--computing higher-order derivatives on one example costs you only a constant factor more than computing the first-order derivative. This is a very interesting thing about deep learning: computing derivatives requires almost the same time as computing the first derivative, as long as your output is a scalar--but you do have to pay a constant factor, something like two or three times more compute. And also, to get this K perp direction exactly, you have to do a full batch, full [INAUDIBLE], so that the third-order term is the third-order derivative of the full function--of the population function, or the full empirical function. If you use a mini-batch version, then maybe you wouldn't get the K perp direction very accurately. So there's a bunch of considerations which make it complicated, and we don't even know exactly which one is the bottleneck, so it's a little bit tricky. But that's a great question. Yeah. We have tried quite hard to do this for a while already. OK, all right. So now let's see. I'll do a little more math just to give you a feeling for how we would proceed to analyze this. The way to analyze it is to view this, as I said, as two things. You first define a coupled process, which is easier to analyze: Ut plus 1 is defined to be (1 minus eta H) Ut minus eta ksi t.
This process is well understood, because you are basically doing optimization on the quadratic approximation--we have done this already. And then you characterize the difference between them: define rt to be xt minus Ut. So the main question is what rt is doing, and we can compute a recursion for rt. Plugging in the definitions of xt plus 1 and Ut plus 1, you get (1 minus eta H)(xt minus Ut) minus eta T xt xt, plus higher-order terms--which is (1 minus eta H) rt minus eta T xt xt. The interesting thing is that you still have the contraction, and the second term is the bias, or the regularization effect--but there's no noise anymore, no stochasticity. There is still a little bit of stochasticity inside xt, but at least you don't have the ksi t term, because it cancels when you take the difference with the stochastic trajectory. And you can actually replace the xt as well: you can claim this is close to the version where you plug in Ut instead of xt, just because xt and Ut are somewhat similar. Of course, you would want to understand the exact difference, but at this level, because we are working to leading order in eta, the difference only contributes higher-order terms that you can drop. So what happens is: if you look at the subspace K, the span of H, there is still contraction--you have some additional bias, but the bias gets corrected by the contraction eventually. However, in the K perp subspace, the contraction is gone. Project everything onto the K perp subspace: then H doesn't have any effect anymore, because H has nothing to do with the K perp direction--it's just the projected recursion. So now the thing is really simple.
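Here is a 1-D sketch of this coupling argument (toy constants of my own): run SGD on a cubic loss and the coupled process on its quadratic part, driven by the same noise. Their difference r_t = x_t - U_t carries no fresh noise and settles into a small negative bias driven by the third-derivative term.

```python
import numpy as np

rng = np.random.default_rng(0)
eta, h, c, T = 0.01, 1.0, 0.5, 20_000
noise = rng.standard_normal(T)

x = u = 0.0
rs = np.empty(T)
for t in range(T):
    # SGD on g(x) = h*x**2/2 + c*x**3/6, whose gradient is h*x + c*x**2/2
    x = x - eta * (h * x + 0.5 * c * x * x + noise[t])
    # coupled quadratic process, same noise realization
    u = u - eta * (h * u + noise[t])
    rs[t] = x - u

# r_t hovers around -(c/2) * E[x^2] / h: a small deterministic-looking bias
print(rs[T // 2:].mean())
```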
In the K perp subspace, you are basically just taking the previous rt, projected onto K perp, plus something new--you don't have any contraction at all. So if you unroll this recursion, you get the K perp projection of r0, minus the sum of the third-order terms. So now the question becomes: how do you understand the sum of the third-order terms? And by the way, I never told you where the third-order term is going--I only claimed that there is a third-order term. So now the question is where the third-order terms are going on average, in the long run, over time. We can ignore the projection--it's just a restriction to the subspace--and look at the sum. First of all, let's assume--this is a heuristic--let S be the limit of the covariance, the expectation of Uk Uk transpose, as k goes to infinity, and also assume this Uk mixes as a Markov chain. I'm not sure whether you are familiar with Markov chain mixing, but you just assume that Uk is doing the bouncing around--it's roughly like a Gaussian, and S is the covariance of that Gaussian. Then you can rewrite the sum: it roughly equals little t times T applied to the expectation of Uk Uk transpose--that is, t times T(S). So what I'm doing here is: suppose you have some variable u drawn from a Gaussian with covariance S. Then the expectation of T u u is--let's look at the i-th coordinate. The i-th coordinate is the sum over j, k of T_ijk u_j u_k. Then you switch the sum with the expectation, and you get the sum over j, k of T_ijk times the expectation of u_j u_k.
And this is the sum over j, k of T_ijk times the (j, k) entry of the expectation of u u transpose. And we denote this by T of the expectation of u u transpose--so you can also apply the tensor to a matrix, and the definition is just this: (T(S))_i is the sum over j, k of T_ijk S_jk. Anyway, this might be a little too much notation for this course. But basically, what you have is--I guess there's an eta here, my bad--the sum is roughly eta times t times T(S). The t comes from having t steps; the eta comes from the eta in each step. And T(S) is the tensor applied to the average covariance, the mixed covariance of the Ut process. So the question becomes: what is T(S)? If you know T(S), then you know which direction you're going, and how far--you go t steps in that direction. So the final question is what this T(S) is. And this is very informal, and not even exactly correct--to fix it you need something a little more [INAUDIBLE]. So the bias direction is minus T(S). Remember, T is the third-order derivative, and S is just some matrix for the moment. And you can rewrite this: if you think about it, T(S) is the gradient of the inner product of the Hessian with S--T(S) equals nabla of the inner product of nabla squared g(x) with S. So this is an identity. And in some sense, you can argue--this is a heuristic argument, and actually not even a 100% correct argument--that the minus T(S) term is trying to make the inner product of nabla squared g(x) with S smaller, because you are moving along the negative gradient of that function. So let's define that function to be R(x). You are moving in the negative nabla R(x) direction--that's why you can argue that you are trying to make that function smaller.
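The swap of sum and expectation above is pure linearity, so it even holds exactly for empirical averages, not just in expectation. A quick sketch with `einsum` (random data, names my own):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 500
T3 = rng.standard_normal((d, d, d))     # stand-in for the third-derivative tensor
U = rng.standard_normal((n, d))         # n samples of u

avg_Tuu = np.einsum('ijk,nj,nk->i', T3, U, U) / n   # average of T[u, u] over the samples
S = U.T @ U / n                                      # empirical second-moment matrix
T_of_S = np.einsum('ijk,jk->i', T3, S)               # (T(S))_i = sum_{j,k} T_ijk S_jk

assert np.allclose(avg_Tuu, T_of_S)      # identical, by linearity of T in the matrix slot
```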
So the additional bias--this minus T(S) term--is trying to make R(x) smaller by moving along the negative gradient of R(x). And eventually, if you work out all the subtle details, with a lot of other assumptions and fixes--I don't have time to go through all of this, as we're already running late--you can prove something in certain cases. Let me just write down what kind of formula you can prove. You can prove something like: SGD with so-called label noise--I didn't tell you what label noise means; it doesn't matter, it's one kind of noise, not exactly the main one, just some additional noise--converges to a stationary point of the regularized loss L hat plus lambda R, where R(theta) is roughly equal to the trace of the Hessian of the loss. There's no need to understand all the details here--there are other subtleties, other assumptions, and so forth. I just want to give you a taste of what kind of theorems you may hope to prove. Basically, you are saying that if you run a certain kind of SGD on the original, unregularized loss L hat, it converges to a stationary point of a regularized loss. That's why you get this regularizer for free. And what regularizer is it? Here, the regularizer is the trace of the Hessian--something about the flatness of the loss L hat, right? The Hessian is the curvature; the trace of the Hessian is about the flatness at that point. So you are implicitly encouraging flatness of the loss function. But there are a lot of things hidden here--actually, I'm omitting a few important assumptions, just because it takes too much time to write them down. But this is the kind of thing you may hope to prove in some other cases too.
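The identity behind this step--that T(S) equals the gradient of the inner product of the Hessian with S, when T is the third-derivative tensor--can be sanity-checked by finite differences on a small cubic (my own test function, chosen so the check is exact up to roundoff):

```python
import numpy as np

def g(x):
    # simple cubic test function; its third-derivative tensor is constant
    return x[0] ** 2 * x[1] + 0.5 * (x[0] ** 2 + x[1] ** 2)

def hessian(x, eps=1e-4):
    # central finite-difference Hessian (exact for cubics up to roundoff)
    d = len(x)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei, ej = np.eye(d)[i] * eps, np.eye(d)[j] * eps
            H[i, j] = (g(x + ei + ej) - g(x + ei - ej)
                       - g(x - ei + ej) + g(x - ei - ej)) / (4 * eps ** 2)
    return H

S = np.array([[2.0, 0.5], [0.5, 1.0]])
x0 = np.array([0.3, -0.2])

def R(x):
    return np.sum(hessian(x) * S)        # R(x) = <Hessian of g at x, S>

eps = 1e-4
grad_R = np.array([(R(x0 + np.eye(2)[i] * eps) - R(x0 - np.eye(2)[i] * eps)) / (2 * eps)
                   for i in range(2)])

# third-derivative tensor of g: entries 2 on all permutations of (0, 0, 1), else 0
T3 = np.zeros((2, 2, 2))
for idx in [(0, 0, 1), (0, 1, 0), (1, 0, 0)]:
    T3[idx] = 2.0
T_of_S = np.einsum('ijk,jk->i', T3, S)

print(grad_R, T_of_S)   # both close to [2., 4.]
```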
OK, any questions? [INAUDIBLE] That's a great point. Just to rephrase the question: the question is whether even higher-order derivatives--the fourth-order term--would also contribute to the bias. I think, on a conceptual level, if the third-order term is not 0, then the fourth-order one wouldn't matter that much. And if the third-order term is 0, then indeed the fourth-order term would have an effect. But so far we're not thinking about that--we are assuming the third-order term is doing something nontrivial, so that the fourth-order term is dominated by the third-order term. I see. It seems like [INAUDIBLE]. In the last theorem, the stationary point is stochastic. So regular [INAUDIBLE] Hessian instead of the [INAUDIBLE]. Oh, I see. Yeah. So the question is: why is the regularizer the trace of the Hessian--a second-order quantity? This is because when the regularizer involves the second-order derivative, the direction you move in is the gradient of the regularizer. That's what a regularizer really means: you move in the direction of the gradient of the regularizer. That's how they match up. So the direction you actually move in depends on the third-order derivative of the loss function, while the regularizer itself involves the Hessian--one order lower. So there are two views. One view is at the regularizer level: there it's the second-order derivative of the loss. The other view is in the iterate space: there it's the third-order derivative of the loss. And if, in the iterate space, the third-order derivative vanishes, then you have to talk about the fourth-order derivative of the loss.
And in that case, the regularizer probably will be a third-order derivative of the loss. It's because your regularizer is always one order up compared to the direction you move in. Does this make sense now? [INAUDIBLE] the SGD [INAUDIBLE]. So what's special about that? [INAUDIBLE] Yeah. So why is a flat stationary point better? Right. So I think-- [INAUDIBLE] Why do we [INAUDIBLE] or not? So I think I'm going to talk about that immediately at the beginning of the next lecture. And the answer is that we do believe-- it's generally [INAUDIBLE]. It depends on some-- it kind of relates to the Lipschitzness of the models-- I'll discuss more next week, on Wednesday. OK, bye.
Stanford CS229M: Machine Learning Theory, Fall 2021. Lecture 2: Asymptotic analysis, uniform convergence, Hoeffding's inequality.

OK, cool. Let's get started. OK, so it's kind of complicated, right? It's kind of amazing, right? This technology is so advanced, so you can do all of these things together. But I still have to do them one by one. I have 10 action items-- maybe more than 10. I need to also connect to Wi-Fi. That's actually something I have to do. OK, but, oh, let's get started. Oh, I need to have my notes. So what we're going to do today is that we are going to continue with the asymptotics from last time a little bit, for about 15 to 20 minutes. This is just to wrap up what we have discussed. And as I said, this first lecture is always a little bit tricky for me to teach, because the tools-- if you want to make it formal, it requires some background. And if you don't want to make it formal, sometimes there is a lot of confusion. So from the second half of this lecture, I think we are going to talk about things that require less background, in some sense, and are more self-contained. OK, so the plan is the asymptotics, and then the so-called uniform convergence. I'll define what it is. And uniform convergence will be the main focus for the first few weeks of the course. OK, so let's start by reviewing what we have done last time. So what we had last time was this theorem, where we showed that if you assume consistency-- which is something that we basically just assume without much justification; it's not always true, and it also depends on the problem-- consistency basically means that theta hat will converge to theta star. Recall that theta hat is the ERM, the Empirical Risk Minimizer, and theta star is the minimizer of the population risk. So you care about recovering theta star, or recovering something as good as theta star.
And we also assume a bunch of other things, like, for example, that the Hessian is full rank, and also some regularity conditions, which I didn't even define exactly. For example, this requires something like some of the variances being finite, so that you can apply the theorems. And then, under these assumptions, we have that-- actually, it's challenging for me, because this podium-- it becomes unstable. It's like I feel like I'm writing while I'm on a boat. [LAUGHTER] But it's probably good for me to practice. What is this called? I would be better with some of the sports guys after we do this. Anyway, OK. So I guess we have discussed that you know the order of the difference between, say, theta hat and theta star. The order is on the order of 1 over square root of n. And formally, you write it like this: you scale by square root of n, and you know that it's on the order of 1. And you also know something about the loss. You know that the excess risk, L theta hat minus L theta star, is on the order of 1 over n. And if you write it formally, you scale it by n, and then you say it's on the order of a constant. And also, you know that the distribution of theta hat minus theta star, suitably scaled, is converging to a Gaussian distribution with mean 0 and some covariance. And this covariance is complicated, but let me write it something like this. This is just reviewing what we have written last time. And four, we also know the distribution of the excess risk. This is the distribution of a scalar, because the excess risk is a scalar. If you scale it by n, then you know the distribution is converging to the distribution of this random variable. And this random variable S is a Gaussian random variable with mean 0, and covariance something like the above, but not exactly. You don't have to remember exactly what the covariance here is, because I don't even remember it if I don't read my notes. There are some intuitions about this, which I'm going to discuss.
But generally, this is just something you get from derivations. So last time, we roughly justified number 1 and number 2. And today, I'm going to, again, give a relatively heuristic proof for 3 and 4, just very quickly, so that we can wrap this up. So I guess, just to very quickly review what we have done last time: the key idea to derive all of this is by doing Taylor expansion. And the key equation-- let me just rewrite what we did last time-- is this. So you look at the gradient of the empirical loss at theta hat. This is guaranteed to be 0, because theta hat is the minimizer of the empirical loss. And you Taylor expand this around theta star: 0 = nabla L hat (theta star) + nabla squared L hat (theta star) times (theta hat minus theta star), plus higher order terms. And then, you rearrange this and get: theta hat minus theta star is equal to minus the inverse of the empirical Hessian at theta star, times nabla L hat at theta star, plus higher order terms. And then, you say, I'm going to replace all the hats-- like L hat by L-- using some kind of law of large numbers or uniform convergence. And last time, we roughly discussed that this is on the order of 1 over square root of n, because you have concentration: the empirical gradient is roughly nabla L at theta star, which is 0, plus a fluctuation on the order of 1 over square root of n. And the inverse Hessian is converging to a constant. So that's why the whole thing is converging to something on the order of 1 over square root of n. And this time, we are going to make it a little more formal. So we'll get the exact distribution of theta hat minus theta star. I'll make this part really quick, so that if you are not familiar with the background, you don't get confused too much. So the idea is that-- if you look at what the distribution of this is, if you think about this, this is the product of two random variables.
And you roughly know what the distribution of each of the random variables is, right? So this one is going to converge to a constant, which is nabla squared L (theta star) inverse. And this one is going to be a Gaussian distribution if you scale it correctly, right? And basically, what you need to know is: what's the distribution of the product of two random variables, when you know what happens with each of them? And what you do, formally, is you first scale by square root of n, so that each of these two random variables is on the order of 1, so that you can reason about them easily. So you scale by square root of n, you get this. And then, you have the inverse. And now, you scale this empirical gradient by square root of n. And also, let's fill in the population gradient, which is 0. So this one is 0. I just write it here to make it closer to something you know. And then, this plus higher order terms. This is still higher order terms, even if you multiply by square root of n-- I think there's a typo in the lecture notes, which somebody pointed out, which is really nice. But still, no matter how you multiply, it's still higher order terms compared to the other terms, right? So now, this one-- let's call it Z. This Z, by the central limit theorem-- Z has a Gaussian distribution with some covariance. And what's the covariance? The covariance will be the covariance of nabla l (x, y, theta star). Why? Because what is nabla L hat (theta star) minus the population gradient? This is really just the empirical version of the right-hand side, the population gradient. So this is really 1 over n times the sum of nabla l (xi, yi, theta star), minus the expectation of the same thing-- maybe for simplicity, let's just write x, y, theta star, all right?
So when you apply the central limit theorem, you know that if you scale this by square root of n, then you get a Gaussian distribution, right? So that's why we know the random variable Z has a Gaussian distribution. And we know this one will converge to a constant as n goes to infinity. And there is a theorem that specifically deals with this-- Slutsky's theorem. But actually, if you think about it, this makes a lot of sense. So if you want to know what's on the left-hand side, basically, it just becomes the distribution of the right-hand side. It's a constant times a Gaussian distribution-- this constant times the Gaussian distribution. So basically, we have to figure out, what's the distribution here? What is the distribution of a constant times Z? So abstractly speaking, the question we're dealing with here is-- so, a different color for abstraction. So basically, you're asking: what is the distribution of A times Z, if A is a constant and Z is from some Gaussian distribution with covariance sigma? All right. And I'm missing a page. And you know that there is a lemma, which says that in this case, A times Z also has a Gaussian distribution, with mean 0 and covariance A sigma A transpose. I think this is a homework question-- a homework 0 question. I'm not sure whether it's still there. I forgot to double-check. But this is something you can do-- what's a linear transformation of a Gaussian distribution? Still a Gaussian distribution; it's just that the covariance gets transformed. And actually, the way to transform the covariance is that you left-multiply by the transformation, and you right-multiply by the transpose of it, and you get the new covariance. So it's not that simple to derive this, but this is something you can either look up from a book, or you can derive yourself. All right.
So with this small lemma, we know that the distribution of theta hat minus theta star, suitably scaled, converges to a Gaussian distribution with mean 0. So here, A corresponds to nabla squared L (theta star) inverse, right? And sigma corresponds to this one. And you just plug in these two choices. Then, what you have is basically what we intended to prove: nabla squared L (theta star) inverse, times the covariance of nabla l (x, y, theta star), times nabla squared L (theta star) inverse. OK? This is the covariance, intuitively. Any questions so far? I realize that my camera is frozen. I don't know why. Something seems to be wrong. For those people who are on the Zoom meeting, can you see my video? It's frozen. I see. Thanks. Maybe let me turn it off, and then turn it on. OK, so it's working now? OK, cool. And you can see that hat? You can see everything? OK, thanks. OK, cool. Any questions? Also, if you are in the Zoom meeting, feel free to just unmute and ask any questions. So the covariance at the end-- is that the Hessian inverse with the negative [INAUDIBLE]?? This is the inverse. Yeah, the covariance. Sorry, which term are you asking about? This one? The one next to it. Here? The one to the right, yeah. Yeah, this is the same-- That's the exact same one? Yes, it's exactly the same one. It's supposed to be the same thing transposed, right? But this is a symmetric matrix, so the transpose is the same. So this is the inverse. OK, so I guess what I'm going to do is skip the derivation for number 4. It's kind of the same thing. It's just that-- because you already know the distribution of theta hat, you should know the distribution of L theta hat. And what you do is some Taylor expansion to make it a polynomial of theta hat. And then, you can use what you know about theta hat. All of this is in the lecture notes. I guess I'm going to skip this part.
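The sandwich covariance just derived can be checked by simulation on the simplest example I can think of (my own choice of setting, not the lecture's): squared loss for mean estimation. There the Hessian is 2 and the covariance of the gradient is 4 Var(x), so the sandwich collapses to Var(x), and sqrt(n)(theta hat minus theta star) should look like N(0, Var(x)):

```python
import numpy as np

# Sanity check of the asymptotic covariance formula on mean estimation.
# Assumed setup: loss l(x; theta) = (x - theta)^2, so theta* = E[x] and the
# ERM theta_hat is the sample mean.  Hessian = 2, Cov(grad l) = 4 Var(x),
# and the sandwich (1/2) * 4 Var(x) * (1/2) = Var(x).
rng = np.random.default_rng(0)
n, trials = 500, 5000
x = rng.exponential(scale=1.0, size=(trials, n))   # skewed data, Var(x) = 1
theta_star = 1.0                                    # E[x] for Exponential(1)

scaled_err = np.sqrt(n) * (x.mean(axis=1) - theta_star)
print(scaled_err.var())   # close to the sandwich value, Var(x) = 1
```

The data is deliberately non-Gaussian here: the theorem only needs the regularity conditions, and the scaled error still comes out approximately Gaussian with the sandwich variance.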
So, if we wrote it, it looks like-- for example, like the [INAUDIBLE]-- is there a reason for that? You mean the covariance seems to be like the new-- It's like, instead of the gradient direction module, [INAUDIBLE] about this. Is there a connection between the two? I think there's a connection, but I don't feel like it's-- this Hessian shows up very often in many different cases, right? So there is some connection, but I don't feel like it's so closely related that it's important enough to know, yeah. Yeah, OK. So I guess I'll skip the proof for number 4. If you're interested, you can look at the proof in the lecture notes. And what I'm going to do is spend another 5 to 10 minutes talking about a corollary of this theorem, in maybe a more typical setting. This theorem is very general, because it doesn't say anything about the loss function. It doesn't say anything about the model. It works for almost everything, as long as you have consistency. And here, let me instantiate this theorem for the so-called well-specified case, where you use the log likelihood. And then, we can see that all of this covariance becomes a little bit more intuitive, and things become a little bit easier. So this is the so-called well-specified case. So I guess, in addition to theorem 1, let's also assume that there exists some probabilistic model, parameterized by theta, for y given x-- p of y given x, theta. So you assume that y is generated from this probabilistic model, right? So what does it mean? So basically, it means: suppose there exists a theta sub-star. I'm using the subscript here to differentiate it from the theta star defined before, which was the minimizer of the population risk. And actually, they are the same. But for now, they are different.
So basically, you assume that there exists a theta sub-star such that the yi, the data, conditional on xi, is generated from this probabilistic model. All right. So this is why it's called well-specified. It means that your data is generated from some probabilistic model. And also, in this case, suppose the loss function you use is the negative log likelihood. Right? Before, we didn't really say what the loss function needs to be. It could be anything. And now, let's say the loss function is the negative log likelihood of this probabilistic model. Think of this as, for example, logistic regression, right? Or linear regression with Gaussian noise. So your negative log likelihood could be the cross-entropy loss, or could be the mean squared loss, depending on what probabilistic model you have. All right, so this is your loss function. And when you do this, then you know a bunch of things which are nicer, in some sense. So first of all, you know that theta star is equal to theta sub-star, right? So recall that this is the minimizer of the population loss, and this is the ground truth-- this is the one that generates your data. And in this case, you can prove that, when you have infinite data-- where theta star is the minimizer in the infinite-data case-- you recover the ground truth, theta sub-star. So they are exactly the same thing. And you also know a bunch of other things. For example, you know that the gradient-- this is kind of trivial. I'm just writing it here because you need it as an intermediate step in the proof. But if you don't care about the proof, this is just an intermediate step. So you know that the expected gradient over the population at theta star is 0. And also, you know the covariance of the gradient. The covariance of the gradient is the quantity that we care about, right? Because in the previous theorem, the covariance of the gradient shows up in the variance of theta hat minus theta star.
So, the covariance of the gradient at theta star-- I guess, from now on, we don't distinguish theta star and theta sub-star, because they are the same-- you know that the covariance actually happens to be the Hessian. And when the covariance of the gradient happens to be the Hessian, then the covariance of theta hat minus theta star can be simplified. Because this used to be a Gaussian distribution with something like this, right? The covariance of theta hat minus theta star used to be this product of three matrices. But now, what's in the middle is the same as the Hessian. That's what we claimed in number 3. So that means you can cancel this with this, and you get only one term. So what's left is just the inverse of the Hessian. Maybe I should just use black forever. Yeah, and if you plug in this-- the covariance of the gradient-- you basically plug this into all the statements that you had before. Then, you can also get something like, for example-- well, the important thing is this-- the excess risk. I guess we have claimed what its order is, but actually, here, you can be more precise. You know that n times the excess risk is converging to basically 1/2 times a chi-squared distribution with degree p. So p is the dimension of theta. So suppose you have p parameters. Then, this is the distribution of the excess risk. And if you take the expectation over all the randomness, then what you get is: the expectation of n times the excess risk is equal to the expectation of 1/2 times the chi-squared distribution, which is equal to 1/2 times p. By the way, the chi-squared distribution-- you don't have to know anything detailed about it. This is basically the distribution of a sum of p squared standard Gaussians. So you know a lot of things about it. You know it's positive, and you know that, for the chi-squared with degree p, the mean is p. If you need to know more about this, just Wikipedia it. It's very easy.
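The identity above-- in the well-specified case, the covariance of the gradient equals the expected Hessian (the Fisher information identity)-- can be checked by Monte Carlo on a one-dimensional logistic model. The specific model and numbers are my own choices for illustration:

```python
import numpy as np

# Monte Carlo check of Cov(grad l) = E[Hessian] at the ground-truth theta*.
# Assumed setup: well-specified logistic model, y | x ~ Bernoulli(sigmoid(theta* x)),
# loss = negative log likelihood, so grad l = (sigmoid(theta* x) - y) * x and
# Hessian = sigmoid * (1 - sigmoid) * x^2.
rng = np.random.default_rng(0)
theta_star, m = 1.0, 200_000
x = rng.standard_normal(m)
p = 1.0 / (1.0 + np.exp(-theta_star * x))      # P(y = 1 | x) under the model
y = (rng.random(m) < p).astype(float)

grad = (p - y) * x                  # per-sample gradient at theta*
cov_grad = np.mean(grad ** 2)       # E[grad] = 0 at theta*, so this estimates the covariance
exp_hess = np.mean(p * (1 - p) * x ** 2)
print(cov_grad, exp_hess)           # the two estimates nearly coincide
```

This equality holds only at the ground-truth parameter and only in the well-specified case; for a misspecified model the full sandwich from the general theorem is needed.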
We don't need anything deep about it. So the important thing is the last equation. So basically, we know the excess risk in expectation-- here, the expectation is over the randomness of the data set, right? So the excess risk-- if you don't scale it by n, then you get-- I guess I should write "converges to," because it wouldn't be exactly equal-- this is 1/2 times p over n. So you not only get the dependency on n; you also get the dependency on p, on the dimension. So you know the order of the excess risk. Of course, there are higher order terms-- little o of 1 over n. And actually, you also know the variance of the excess risk, which I don't think is super important. The variance is smaller than the mean. OK, so in the lecture notes, I think we have proofs for all of this. But I'm not going to discuss the proofs. The most important things, I think, are this one and this one. So the first one is saying that the shape of theta hat minus theta star, the randomness-- the shape is kind of the same as the inverse of the Hessian. So in those directions where your Hessian is steeper, you have less stochasticity, right? And in those directions where the Hessian is smaller, you have more stochasticity. And the last one is saying that it doesn't matter what the Hessian is. The only thing that matters is the number of parameters. If you care about this asymptotic regime, the only thing that matters is p, the number of parameters. We're going to discuss the limitations of all of these theorems in a moment. But this is what we got from this asymptotic approach. Any questions so far? OK, cool. So I guess, if you're interested in more details, you can take a look at the lecture notes. So now, let's move on to uniform convergence. And often, people call this line of research nonasymptotic analysis. So let's first discuss that.
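The p/(2n) excess-risk formula above can also be checked by simulation. Here is a toy instantiation of my own choosing: Gaussian mean estimation in p dimensions with the negative log-likelihood loss, where n times the excess risk should behave like one half of a chi-squared with p degrees of freedom:

```python
import numpy as np

# Toy check that E[n * excess risk] is approximately p / 2 in the
# well-specified case.  Assumed setup: x ~ N(theta*, I_p) with theta* = 0
# and loss l(x; theta) = (1/2) * ||x - theta||^2 (the NLL up to a constant),
# so the population excess risk is (1/2) * ||theta_hat - theta*||^2.
rng = np.random.default_rng(0)
p, n, trials = 5, 200, 4000
x = rng.standard_normal((trials, n, p))      # trials independent data sets
theta_hat = x.mean(axis=1)                   # ERM = sample mean, per data set

scaled_excess = n * 0.5 * (theta_hat ** 2).sum(axis=1)   # n * excess risk
print(scaled_excess.mean())                  # close to p / 2 = 2.5
```

Note that the n in the scaling and the p in the limit are exactly the dependencies the little-o asymptotic notation would have hidden, which is the motivation for the nonasymptotic language that follows.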
This is actually the approach that we're going to take for the rest of the course. We are going to care about nonasymptotic bounds instead of asymptotic ones. So let me define what this is and motivate why we care about it. So recall that, when you have asymptotic bounds, just like what I wrote above, you know that L theta hat minus L theta star is equal to p over 2n plus little o of 1 over n. However, the problem is that you are hiding a lot of things in this little o of 1 over n. You hide all dependencies other than the dependency on n. So what does it mean? It means the little o notation also hides the dependency on p. So if you tell me that, in the asymptotic regime, you get this bound, what happens is that the real bound could be p over 2n plus 1 over n squared-- maybe the real rate is this-- but it could also happen that the real rate is p over 2n plus p to the 100 over n squared. Both of these two cases would be possible situations given the bound above, right? I wouldn't have any way to distinguish them, because the difference is hidden in this little o notation. Because the little o notation doesn't care about any other dependencies; it only cares about the dependency on n, at least in the context of asymptotics. So this is the problem, because clearly, if your rate is the second one, then this is a very bad rate, right? Very bad. By the way, by rate, I mean how this depends on-- I guess maybe let's just call it a bound, right? So suppose your bound is the second one. Then, it's a very bad bound, because it requires n to be bigger than p to the 50 for the bound to be smaller than 1, right? Because you need the second term to be smaller than 1; then, you need n to be bigger than p to the 50. So just [INAUDIBLE] definition of little o of 1 over n. Does that mean that n times the function goes to 0 as n goes to infinity? Yes, yes. Yeah. Exactly.
So OK, I'm going back to this. So the bound on the right-hand side is going to be very bad. And the bound on the left-hand side-- this one-- is pretty good, in some sense, right? But you have no way to distinguish them, because both of these would be compressed to p over 2n plus little o of 1 over n in the asymptotic sense. So that's the biggest problem. And also, in some sense, when you have other dependencies-- for example, the dimensionality-- even the dependency on n is not the only thing that matters. For example, another, more extreme situation: suppose you compare p over square root of n versus p over 2n plus p to the 100 over n squared, right? Suppose you have these two bounds. If you write them in the asymptotic way, then you are going to conclude that the second one is p over 2n plus little o of 1 over n, in the asymptotic language. And the first one will be something like p over square root of n plus little o of 1 over square root of n. So it sounds like the first one is bad, because it has a worse dependency on n, right? Indeed, when n goes to infinity, the right-hand side is smaller than the left-hand side. But if you think about a more moderate regime of n, then it's not really true. Because for the bound to be less than 1-- if you want p over square root of n to be less than 1, this means that n has to be bigger than p squared. But if you want p over 2n plus p to the 100 over n squared to be less than 1, this means n needs to be at least larger than p to the 50, right? So when n goes to infinity, the left-hand side is worse-- it's a worse bound. But in most of the cases, the left-hand side is actually a better bound.
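The comparison above is plain arithmetic, so it can be made concrete in a few lines (the hidden-constant-free forms and the choice p = 10 are illustrative):

```python
# Numerical version of the two-bound comparison: bound_a(n) = p / sqrt(n)
# versus bound_b(n) = p/(2n) + p^100 / n^2.  Asymptotically bound_b wins,
# but bound_a is smaller for all n up to roughly p^66.
p = 10.0

def bound_a(n):
    return p / n ** 0.5

def bound_b(n):
    return p / (2 * n) + p ** 100 / n ** 2

# bound_a drops below 1 as soon as n > p^2 = 100 ...
print(bound_a(101) < 1)                 # True
# ... while bound_b needs n larger than about p^50 just to get below 1:
print(bound_b(1e6))                     # still astronomically large at n = 10^6
# The crossover where bound_b finally beats bound_a is around n = p^66:
print(bound_a(1e65) < bound_b(1e65))    # True: bound_a still better
print(bound_a(1e67) > bound_b(1e67))    # True: bound_b finally better
```

Setting p / sqrt(n) equal to p^100 / n^2 gives n^(3/2) = p^99, i.e. n = p^66, which is where the crossover lands.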
So if you ask when the left-hand side is a better bound than the right-hand side-- if you solve this, this is roughly saying that if n is smaller than-- I think I did this calculation at some point-- if n is smaller than p to the 66, then the bound on the left-hand side is better than the bound on the right-hand side, just because this p to the 100 is too big, right? So basically, if you use this asymptotic language, the comparison becomes a little weird once you consider the dependencies on other parameters-- for example, the dependency on the dimension, which for modern machine learning is very high. So this is why I think asymptotics, even though they are very powerful, don't necessarily always apply to modern machine learning-- just because the higher order terms can hide dependencies on other quantities, like the dependency on p here. So that's the main issue, basically. OK, so how do we fix this, right? The first thing we need to do is fix the language, in some sense. We cannot only consider n going to infinity; we have to also consider the other quantities involved. So basically, what the nonasymptotic approach does-- this is just a term, a kind of approach-- is that you only hide absolute constants in your bound. You have to hide something, because if you care about every constant, it's going to be too complicated for theory, right? It's going to be a lot of calculation. So here, we allow ourselves to hide absolute constants. But we cannot hide any other dependencies. So you are not allowed to hide a dependency on p. And an absolute constant-- this really means a universal constant, like 3 or 5-- something you can replace by a real numerical number.
And actually, to make everything easier, we are going to introduce this notation-- big O notation. Sometimes, this big O notation has a slightly different interpretation elsewhere. So I wouldn't say I'm redefining it, but I'm going to be clear about what the big O notation means from now on. So from now on, big O notation only hides universal constants. And let me give a more technical definition, which is actually useful in some cases when you're really doing a lot of theory. I'm not sure whether some of you have had this confusion about whether you should use big O or big omega-- sometimes, it can be confusing. So let me define what this big O really means-- at least, what it means in this course. It may not be exactly the same in every paper, but I think people are converging to this interpretation. So: every occurrence of big O of x is a placeholder for some function, say f(x), such that for every x in R, f(x) is less than C times x, for some absolute constant C bigger than 0. So basically-- maybe more explicitly-- it's saying that you can replace O of x by f of x such that the statement is true. So if you see a statement with a lot of big O notations, it means that you can replace all of these occurrences of big O notations by something more explicit, such that the statement is still true. So this may seem like overkill as a definition of big O, which you're probably already familiar with. But I've seen so many cases where I got confused, and I had to literally verify whether I satisfy this definition. Anyway, OK. And also, just for notational convenience, sometimes we also write a less-than-or-similar-to b (a ≲ b). This is just equivalent to: there exists an absolute constant C greater than 0 such that a is less than C times b.
And technically, if you really want to be very solid, this statement should only apply to positive a and b. For negative ones-- ideally, you should just use this only for positive a and b. That's my suggestion, because for negative ones, it just becomes a little bit confusing. So the point here is that-- well, I defined this big O thing, right? It depends on the literature; sometimes, when people define big O, they have to define some limit. But here, in this course, big O really means there's no limit-taking-- you don't have to think about any limit. So, are a and b functions here? Because if a and b are just fixed positive numbers, every number is less than a constant times every other number, right? Right. So a and b could be functions of other, more complex quantities. OK, cool. So these are just some notations. OK, so now, the bounds we care about-- we are interested in bounds of the form: the excess risk, L theta hat minus L theta star, is bounded by big O of some function of, say, p and n, where p could be the dimension and n could be the number of data points. Of course, you can replace this by a function of other things. But the point here is that, after you write this, there's nothing else hidden in the big O-- only a universal constant. And once you have this kind of language, you can compare things in a more proper way. And in the next few lectures, our goal is basically to show how to prove bounds of this form. Sometimes, the bound could be more complicated-- not only depending on the number of parameters and the number of data points. It could depend on the norm of the parameters, and so forth. The point is that we always only hide universal constants. Any questions? So [INAUDIBLE] is a placeholder for some function [INAUDIBLE], but could that be for all of them? For some, yes.
That's very important, because if you replace it with "for all"-- no, I think it's "for some." So you literally only need the existence of one function such that, if you replace the big O by that f of x, the statement is true. So yeah, I think this is actually a very good question, because I got confused by this many times. So maybe let's give an example, right? So you say, the excess risk is less than O of 1 over square root of n. What does this mean? This means that you can replace this-- this is your f of n, right?-- by, say, 5 over square root of n, such that this is exactly true. But you don't need it to hold for every f. If you say for every f, then it means that if you replace it by 0.1 over square root of n, it still has to be true, right? That's too much, right? You only need the existence of one. But of course, if you have the existence of one f, then there are always other f's, which are bigger, that can also be used. But you only need one f. And also-- actually, maybe this is a little bit advanced-- this kind of interpretation also allows you to have big O in your conditions, even. For example, you can write: for all n bigger than O of p, the excess risk is less than 1. I'm not saying this is a correct statement, but this statement would be interpreted as: if you replace this O of p by 2p, then it's going to be correct. Or if you replace this O of p by some constant times p, it's going to be a correct statement. And it's not omega here. It's really big O, which is sometimes confusing. OK, cool. So now, let's move on to the key idea that we are going to use. So to bound this excess risk-- how do we achieve a bound like this? The key idea is to somehow say that L hat theta is close to L theta, right? In some sense. I need to specify what I really mean by these two functions being close, right?
Are they close at every theta, or are they close at a specific theta? So here is a small claim which tells you what you really need. So what you need is that-- suppose L hat theta star is close to L theta star. Suppose these two loss functions, the empirical and population losses, are close at theta star. And also suppose they are close at theta hat. And here, actually, you only need one-sided closeness. So suppose you have both of those. Then this implies that L theta hat minus L theta star is less than 2 alpha. So basically, you just need to show that these two loss functions-- the empirical loss and the population loss-- are close at theta star and at theta hat. Then, you can bound the excess risk by 2 times alpha. And the proof is actually very simple. What you do is-- this is comparing L theta hat with L theta star, right? And your conditions involve comparing L versus L hat. So you have to do some rearrangement to link them, right? So what you do is you say, I want to compare these two, and I write this as a sum of three terms: L theta hat minus L hat theta hat; then L hat theta hat minus L hat theta star; and then L hat theta star minus L theta star. Anyway, I don't know why the video freezes again. Let me restart it. OK. So why do we split it into these three terms, right? Once you see it, it's kind of obvious, because this one is one of the conditions, right? This one is the second condition. And this one is the first condition. And you also have this one, which compares theta hat and theta star directly-- but this is comparing them under L hat. And you know that L hat theta hat minus L hat theta star is less than 0, because theta hat is the minimizer of L hat. So this term is less than 0. And this term is less than alpha. This term is less than alpha. So in total, you get 2 alpha. OK?
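The three-term decomposition above can be verified numerically on a toy problem (again, the setup is my own choice, made only to exercise the lemma): squared loss with standard Gaussian data, where the population loss is L(theta) = theta^2 + 1 and theta* = 0. Taking alpha to be the larger of the two one-sided gaps the lemma assumes, the 2-alpha bound on the excess risk must hold:

```python
import numpy as np

# Numerical illustration of the 2*alpha decomposition lemma.
# Assumed toy setup: l(x; theta) = (theta - x)^2 with x ~ N(0, 1), so the
# population loss is L(theta) = theta^2 + 1 and theta* = 0; we optimize over
# a finite grid of thetas for simplicity.
rng = np.random.default_rng(0)
x = rng.standard_normal(100)                 # one empirical sample, n = 100
thetas = np.linspace(-2.0, 2.0, 401)

L_hat = ((thetas[:, None] - x) ** 2).mean(axis=1)   # empirical loss on the grid
L_pop = thetas ** 2 + 1                              # population loss
i_hat = L_hat.argmin()                               # ERM over the grid
i_star = L_pop.argmin()                              # population minimizer

alpha = max(L_hat[i_star] - L_pop[i_star],   # one-sided closeness at theta*
            L_pop[i_hat] - L_hat[i_hat])     # one-sided closeness at theta_hat
excess = L_pop[i_hat] - L_pop[i_star]
print(excess <= 2 * alpha)                   # True, as the lemma guarantees
```

Writing the check this way mirrors the proof exactly: the middle term L_hat(theta_hat) minus L_hat(theta_star) is nonpositive by the definition of the ERM, so only the two one-sided gaps enter the bound.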
So basically, this is saying that it suffices to show two conditions. The first condition is that L hat and L are close at theta star. The second condition is that L hat and L are close at theta hat. And it turns out that the difficulties of proving these two inequalities are completely different. So if this is number 1 and this is number 2, number 1 is very, very easy to prove, and number 2 will require a lot of work, which takes a few weeks-- maybe not a few weeks, but two weeks. Is there a reason why the first inequality has an absolute value and the second inequality does not? The only reason is that, of course, if you put an absolute value here, it's still true, right? And actually, you can also bound the absolute value if you want. The only reason is that without the absolute value, showing these conditions are satisfied is slightly easier-- you need one fewer step. That's why in most of the books, you don't have that step. And also, you save a constant, a factor of 2. So actually, this is a very good question. The first time I taught this, I just had absolute values, and then later in the lecture I had to do additional steps to fix that constant, which made it a little bit annoying. But fundamentally, you are right: there is no real difference. You don't run into that problem when you show the first inequality? You don't run into that problem in the first inequality, yeah, which I'm going to show just right now. The first inequality is very easy, and I'll tell you why they are different-- it sounds like they are very similar, right? But let me not talk about the difference first. Let me first show inequality 1, and see why it's relatively easy. So the goal is to show 1, and the main tool we are going to use is a so-called concentration inequality.
And this is, in some sense, a nonasymptotic version of the law of large numbers. It's trying to prove the same thing, but in a different language and in a stronger form-- a nonasymptotic version, I guess, of the central limit theorem. You don't have to deal with the limit; you just have a bound that depends on n. And I think probably some of you have heard of this inequality, called the Hoeffding inequality. I think this is probably taught in CS109 or some of the statistics classes, but you don't have to know it beforehand as a prerequisite. So let me state the inequality. It deals with a sum of independent random variables. Let x1 up to xn be independent random variables, and suppose they are bounded: each xi is between ai and bi almost surely, for every i. You can think of ai and bi as just constants, maybe 0 and 1. And we care about the mean: the expectation of the empirical average, 1 over n times the sum of the xi's, is mu. So the central question is, how different is the empirical mean from the expectation? We care about how small this difference is. And this is a random variable, so you have to make a probabilistic statement. So the claim is that the probability that this difference is small is very big-- alternatively, you can say that the probability that this difference is big is very small; they are just the same. So how big is the probability? It's very close to 1, and the difference from 1 is an exponentially small number: the probability that the empirical mean deviates from mu by at most epsilon is at least 1 minus 2 times the exponential of minus 2 n squared epsilon squared over the sum of the bi minus ai squared. OK, so this is the formal statement. Maybe let me try to interpret it a little bit by instantiating a special case. If you define sigma squared to be 1 over n squared times the sum of bi minus ai squared, for i from 1 to n, then sigma squared can be viewed as kind of the variance of 1 over n times the sum of the xi's. This is not exactly the variance, right?
But it's some upper bound of the variance. Why? Because if you look at the variance of 1 over n times the sum of the xi's, you know that the variance is additive over independent terms. So first of all, you get a 1 over n squared in front, because the variance scales quadratically. And then what remains is the sum of the variances of the xi's, so this is equal to 1 over n squared times the sum of the expectations of xi minus expectation of xi, squared. And now, because each xi is always between ai and bi, the expectation of xi as a consequence is also between ai and bi. That means xi minus its expectation is smaller than bi minus ai in absolute value, because both of these two quantities are in this interval. So you get a bi minus ai squared for each of these terms, and the whole thing-- including the 1 over n squared-- is smaller than 1 over n squared times the sum of bi minus ai squared, from 1 to n. So basically, you can think of each bi minus ai squared as the variance of one term, and then you take the sum of them and divide by n squared. That's kind of the variance of the empirical mean. And suppose you take this view. Then you can see what the inequality is saying. It's the following: if you take epsilon to be the square root of some constant c times sigma squared times log n-- so this is something like O of sigma times square root of log n-- you take epsilon to be a little bit bigger than the standard deviation, by a square root log n factor. Then you plug this epsilon into the Hoeffding inequality, where c is a large constant-- for example, c larger than 10. And if you plug this epsilon into the Hoeffding inequality, what you get is that-- so the probability that 1 over n times the sum of the xi's minus mu--
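A quick Monte Carlo sanity check of the inequality itself can help here. This sketch uses Uniform[0, 1] variables (so each ai = 0, bi = 1, mu = 1/2); the trial counts and epsilon are arbitrary choices, and the simulation only confirms that the empirical tail sits below the Hoeffding bound.

```python
import numpy as np

# Hoeffding for x_i in [a_i, b_i] = [0, 1]:
#   P(|mean - mu| >= eps) <= 2 * exp(-2 * n^2 * eps^2 / sum (b_i - a_i)^2)
#                          = 2 * exp(-2 * n * eps^2)   since each b_i - a_i = 1.
rng = np.random.default_rng(1)
n, trials, eps = 200, 50_000, 0.1
means = rng.random((trials, n)).mean(axis=1)   # 50,000 independent sample means

empirical_tail = np.mean(np.abs(means - 0.5) >= eps)
hoeffding_bound = 2 * np.exp(-2 * n * eps**2)

# the bound holds (and is actually quite loose at this eps)
assert empirical_tail <= hoeffding_bound
```

With n = 200 and eps = 0.1, the bound is about 0.037 while the true tail is tiny, which matches the "exponentially small difference from 1" phrasing above.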
This is actually the most interesting regime of this inequality, when you plug in epsilon on this level. Typically, when you use it, you always use epsilon at this level, because this is the useful regime. So when you apply it, you get that the deviation is less than O of sigma times square root of log n, because I replaced epsilon by this. And the probability is bigger than 1 minus 2 times the exponential. Now, let's plug in epsilon-- actually, maybe let's first not replace epsilon; let's first replace sigma. You can see that the right-hand side, by my definition of sigma squared, is the same as in the Hoeffding inequality. And then, plugging in epsilon, I get 1 minus 2 times the exponential of minus 2c log n-- maybe here it's easier if I just keep the c explicit. So I get a 2c in the exponent, which gives 2 times n to the minus 2c. And if you pick the constant c to be something like 10, then you get 1 minus 2 times n to the minus 20, right? So basically, this is saying that with very, very high probability, the difference is smaller than sigma times square root of log n. In other words, with high probability-- with probability, let's say, larger than 1 minus n to the minus 10-- you have that the empirical mean is close to the expectation, in the sense that the difference between them is bounded by big O of sigma times square root of log n. So basically, this is saying that if you think of sigma as the quote unquote "standard deviation," then it's very hard for you to deviate from the mean by something much larger than the standard deviation, right? So this is the deviation from the mean.
And this is the standard deviation, up to a square root log n factor. The log factor in this course is not very important. So this is saying you cannot deviate from the mean by much more than a factor of the standard deviation. Of course, this sigma is not the real standard deviation; it's a perceived one. Actually, we're going to get back to this concept-- there's a notion called the variance proxy, which we're going to talk more about. So in some sense, if you draw this, it's kind of like saying that for this random variable-- suppose you call the empirical mean x hat-- if you look at its distribution, it's something like a bell shape, and the mean is mu. Suppose this is mu, and you look at points that deviate from mu by sigma square root log n. Then you are saying that the mass beyond that is extremely small. How small? Smaller than an inverse polynomial of n-- the mass there is smaller than n to the minus 2c, or inverse poly [INAUDIBLE]. And you can see that this bound cannot be made much, much smaller. One of the ways to see it is that if sigma really is the standard deviation, then your bound cannot be improved much, because for any random variable, you always have some probability mass at that scale. Of course, this is just intuition, right? Because I'd need to define what I mean by "not improved much." But intuitively, this bound shouldn't be improvable by much, because for any random variable, there is always some mass within the mean plus or minus one standard deviation-- there's actual constant mass in that interval. So you cannot make these intervals much, much smaller and get the same bound, because if you make them too small, then a lot of the mass lies outside. So OK, cool.
So now, let's interpret this a little more and instantiate even more. Let's take each ai to be on the order of minus 1-- a negative constant-- and each bi to be on the order of 1. This is the typical situation: your random variable is bounded between minus a constant and a constant. Then what you have is that the empirical mean minus the expectation is smaller than big O of sigma square root log n-- this is the same thing I have written. And what is sigma? Sigma is the square root of 1 over n squared times the sum of bi minus ai squared. Each bi minus ai is on the order of 1, so you get 1 over n squared times n, because there are n of these terms. So sigma squared is on the order of 1 over n, and sigma is on the order of 1 over square root of n. And that's the standard deviation of your empirical mean estimate. So if you plug in this choice of sigma, you get square root of log n over square root of n. And sometimes, people write this as O tilde of 1 over square root of n, just to hide all the log factors. So if you don't care about the log factor, it's basically saying that you cannot deviate by more than 1 over square root of n. It sounds very abstract for the moment, but in the long run, you'll see that this kind of thinking will be used many times. And it's actually useful to just burn this into your head if you really do machine learning theory for life-- but you don't have to. For me, this is something I have, in some sense, already burned into my head. Any questions? Oh. OK, so this was a short review-- I think probably CS109 gets into these kinds of details, but this is just a review of the Hoeffding inequality with a little bit of additional interpretation. And now, let's apply the Hoeffding inequality to our case and see what we can get for the empirical loss, right?
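The 1 over square root of n scaling is easy to see empirically. The sketch below draws variables bounded in [-1, 1] (so mu = 0) and checks that even the worst deviation across many independent datasets shrinks like 1/sqrt(n); the trial count and the constant 5 are arbitrary safety margins, not part of the theory.

```python
import numpy as np

# Empirical check: the deviation of the sample mean scales like 1/sqrt(n).
rng = np.random.default_rng(0)
for n in [100, 1_000, 10_000]:
    means = rng.uniform(-1, 1, size=(500, n)).mean(axis=1)  # 500 datasets of size n
    worst = np.abs(means).max()          # worst deviation from mu = 0 over datasets
    print(n, worst, worst * np.sqrt(n))  # worst * sqrt(n) stays roughly constant
    assert worst <= 5 / np.sqrt(n)       # comfortably within the O~(1/sqrt(n)) scale
```

The printed third column hovering around a constant is exactly the "cannot deviate by more than roughly 1 over square root of n" statement above.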
Recall that our goal is to deal with the difference between these two: L hat of theta is 1 over n times the sum of the losses on each of the examples, and L of theta is literally the expectation of the loss. So this is a perfect case for the Hoeffding inequality, because each loss term corresponds to an xi. But Hoeffding requires the random variables to be bounded, so we just assume that-- in many cases, the loss is indeed bounded. Here, we assume the loss is bounded between 0 and 1. If the loss is not bounded, you need slightly more advanced tools to deal with it. But let's say for now the loss is bounded between 0 and 1. For example, in classification with the 0-1 loss, the loss can only be 0 or 1, so the boundedness holds for every x, y, and theta. Then, if you apply the Hoeffding inequality, what you get is the following lemma-- which is really just an application of Hoeffding. For any fixed theta: L hat of theta is basically an average of terms xi, where xi is the loss of example (x i, y i) under theta. So you can compute sigma squared, the fake variance that we were thinking about: sigma squared is 1 over n squared times the sum of bi minus ai squared, from 1 to n, which is 1 over n squared times n, which is 1 over n. That means L hat of theta minus L of theta is less than O of sigma square root of log n with high probability. And sigma squared is 1 over n, so this is O of square root of log n over square root of n, which you can also write as O tilde of 1 over square root of n. So basically, for every fixed theta, the empirical loss and the population loss only differ by 1 over square root of n with high probability. So it sounds pretty good, right? We showed that they are very close. And how close are they?
They are close by, well-- the difference is 1 over square root of n, which goes to 0 as n goes to infinity. So it's a small number. And there's nothing else hidden here: of course, you have a log factor in n, but you don't have any factor of, for example, the dimensionality. Any questions? So there is a small issue-- go ahead. [INAUDIBLE] with high probability here is 1 minus 1 over n to some positive power? Yep, yep. Exactly. So with high probability-- technically, I should write that the probability that this happens is larger than 1 minus n to the minus c, for some constant c bigger than 0. And this is actually a good time to practice the big O notation. Strictly speaking, the exponent should be an omega of 1 rather than a big O of 1-- you see, sometimes this is confusing; on the fly, I couldn't figure it out. But this is what we mean. Maybe let's just say the exponent is 10. That is definitely a correct statement, because there is a big O on the other side where you can hide everything. OK, cool. So this is a correct statement. But there is an important thing we should note here. What do I mean by "for any fixed theta"? What does this really mean? I have this qualifier here. It really means that you need to first pick theta, and then, after you pick theta, you draw xi and yi iid from the distribution p. Why do you have to do it in this order? Because you want to make sure that the losses L of (xi, yi; theta) are independent for different i's. So if you pick theta first, and then you draw the xi's, then indeed these random variables-- each equal to a loss-- are independent.
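The fixed-theta lemma can be illustrated directly. In this sketch, the per-example 0-1 losses of one fixed theta are modeled as Bernoulli draws with a made-up population loss of 0.3; the check is that across many fresh datasets, the empirical loss lands within a few multiples of 1/sqrt(n) of the population loss essentially always.

```python
import numpy as np

# For one FIXED theta, the 0-1 losses on the examples are iid and bounded in
# [0, 1], so Hoeffding applies: |L_hat(theta) - L(theta)| = O~(1/sqrt(n)) whp.
rng = np.random.default_rng(0)
L_theta, n, datasets = 0.3, 1_000, 5_000       # L(theta) = 0.3 is a made-up value

losses = rng.random((datasets, n)) < L_theta    # 0-1 losses on 5000 fresh datasets
gaps = np.abs(losses.mean(axis=1) - L_theta)    # |L_hat(theta) - L(theta)| per dataset

frac_within = np.mean(gaps <= 3 / np.sqrt(n))   # fraction within 3/sqrt(n)
assert frac_within > 0.99
```

Note that nothing here depends on the dimensionality of theta, matching the remark above that no dimension factor is hidden in the bound.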
But this doesn't mean that you can do this for a theta that depends on the xi's-- which is actually what I'm going to talk about next. So first of all, you can apply this with theta equal to theta star. That's allowed, because theta star is a universal quantity: theta star exists even before you draw the samples. Why? Because theta star is the minimizer of the population risk, and the population risk doesn't depend on the samples-- it only depends on the distribution. So that's why you can apply this with theta equal to theta star, and that's how we got inequality 1: L hat of theta star minus L of theta star is less than O tilde of 1 over square root of n. So now the question is whether you can apply this to theta hat. And the answer is no, you cannot. And it's not just some subtle issue of mathematical rigor-- it's very far from being correctly applicable to theta hat. It's not a small mathematical nuance. The reason is that there's a dependency issue. As I alluded to before, the dependency is this: you first have theta star, which depends on the population distribution and exists before you begin to draw the samples. Then you draw the samples, and then you get theta hat, and then you can compute, for example, L of theta hat or L hat of theta hat. But theta hat depends on the samples. That means the losses L of (xi, yi; theta hat) are not independent of each other. So you cannot apply the Hoeffding inequality, because they are not independent random variables. And this is important, because if you really could apply the Hoeffding inequality at theta hat, you would always get 1 over square root of n, with no dependency on anything. Then machine learning would be much, much easier.
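The dependency issue can be seen in a toy simulation. This sketch invents a finite class of 1000 "predictors," each with population 0-1 loss exactly 0.5, so in population they are all equally bad; picking the empirical minimizer on the same data then inflates the gap far beyond the fixed-theta scale. All sizes here are arbitrary.

```python
import numpy as np

# Why Hoeffding fails at theta_hat: theta_hat is chosen using the SAME samples,
# so |L_hat(theta_hat) - L(theta_hat)| can be much larger than a fixed-theta gap.
rng = np.random.default_rng(0)
n, num_thetas = 100, 1_000
losses = (rng.random((n, num_thetas)) < 0.5).astype(float)  # per-example 0-1 losses
L_hat = losses.mean(axis=0)           # empirical loss of every predictor
gaps = np.abs(L_hat - 0.5)            # |L_hat(theta) - L(theta)|, since L(theta) = 0.5

gap_fixed = gaps[0]                   # a theta chosen before seeing the data
erm = np.argmin(L_hat)                # "theta_hat": the empirical risk minimizer
gap_erm = gaps[erm]

print(gap_fixed, gap_erm)             # gap_erm is typically several times larger
assert gap_erm <= gaps.max()          # trivially controlled by the sup over theta
assert gap_erm > 0.08                 # selection bias: well beyond a typical fixed gap
```

The last assertion reflects that minimizing over 1000 data-dependent estimates biases the empirical loss downward, which is exactly the effect uniform convergence has to control.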
We don't have to think about sample complexity. It's always small. So basically, for the next two weeks we are dealing with this: how do we deal with theta hat? So the idea to fix this is called uniform convergence. And the key point is that you can only apply Hoeffding to a theta that is predetermined before drawing the data. I guess by itself this might sound a little bit vague, so let me spell out what I mean. What we know now is that for every theta that has nothing to do with our samples, the probability that L hat of theta is close to L of theta is large-- of course, I didn't specify exactly what epsilon and delta are, but this is the form of the theorem we can prove right now. And you can plug in theta equal to theta star; that's fine. But this is not the same as the second statement, which is what I'm going to prove in the next one or two weeks. These are two different statements. The second statement is saying that you first draw the samples, and then, after you draw the samples, for all theta simultaneously, these two functions are close. Maybe it's useful to draw a figure. So there is a function L of theta: the horizontal dimension is theta, and the vertical dimension is L of theta. And now, let's look at the empirical loss. Let me give a toy example where these two statements are different-- there are only three cases. Consider the case where L hat of theta is the red function with probability 1/3, the orange function with probability 1/3, and the green function-- I didn't have a different color-- with probability 1/3.
And so what you know is that for any fixed theta, if you look at the probability that L hat of theta is different from L of theta-- what's the chance that they are different? This chance is at most 1/3. Because if you look at any point-- for some theta, actually, all three functions agree with L of theta. But if you pick a point inside, say, the red bump, then with probability 1/3, L hat is the red function, which is different from L of theta there; and in the other two possibilities-- with probability 2/3-- L hat of theta is equal to L of theta. So basically, for every fixed theta, this probability of being different is at most 1/3. On the other hand, look at a statement like this: the probability that for all theta, L hat of theta is close to L of theta. What is this saying? This is saying that the two functions are the same globally. And clearly, in any of the red, orange, and green cases, this probability is 0, because in all three random cases, the two functions are not the same everywhere-- there is always some difference somewhere. So that shows that you cannot easily swap the probability and the for-all quantifier. They are just not interchangeable. I guess some of you probably expected that this is about the union bound, right? When you do a union bound, there are always these kinds of questions-- whether you can switch the probability with a for-all quantifier-- which we are going to talk about in a moment. So I hope this demonstrates that it's more difficult to prove inequality 2. The take-home point is that it's more difficult to prove inequality 2. What is inequality 2?
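The red/orange/green picture can be made fully concrete. In this sketch (all functions made up), L(theta) = theta squared, and L hat equals L plus one of three narrow bumps at disjoint locations, each with probability 1/3; at any fixed theta, at most one draw differs from L there, while no draw agrees with L everywhere.

```python
import numpy as np

# Pointwise closeness vs uniform closeness for a 3-case random L_hat.
thetas = np.linspace(-1.0, 1.0, 201)
L = thetas ** 2

def bump(center):
    return np.exp(-((thetas - center) / 0.05) ** 2)  # narrow bump around `center`

draws = [L + bump(-0.5), L + bump(0.0), L + bump(0.5)]  # the 3 possible L_hat's

i = 100                                 # a fixed theta = 0.0, inside one bump only
p_pointwise = np.mean([abs(f[i] - L[i]) > 1e-6 for f in draws])
p_uniform = np.mean([np.max(np.abs(f - L)) < 1e-6 for f in draws])

assert abs(p_pointwise - 1/3) < 1e-12   # P(L_hat(theta) != L(theta)) = 1/3 at this theta
assert p_uniform == 0.0                 # P(L_hat == L everywhere) = 0
```

So the pointwise mismatch probability is 1/3 while the "for all theta" event never happens, exactly the non-swappability of the probability and the quantifier.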
Inequality 2 bounds the difference between L of theta hat and L hat of theta hat. And the reason it's hard is that theta hat is a function of the data set, so you lose the independence. And so the goal of many of the remaining lectures is to show that this is indeed bounded, using so-called uniform convergence. By uniform convergence-- let me just summarize; I hope you've got some intuition here already-- we need to prove something like: the probability that, for all theta, L hat of theta is close to L of theta within epsilon, is larger than 1 minus delta. So we need to prove something like this using some techniques. And you will see that you're going to get much looser bounds when you prove something like this: the epsilon and delta will be different from the epsilon and delta you can get when the for-all quantifier is outside the probability. I'll show how to prove this kind of bound in the next two lectures. But just to check that this suffices, as expected: you know that L of theta hat minus L of theta star is less than, by the earlier claim, the sum of the differences L theta star minus L hat theta star and L theta hat minus L hat theta hat, and this is less than 2 times the sup over all theta of L theta minus L hat theta, right? So if you can show that for all theta the two losses are similar, then you have a bound on the excess risk. Maybe, in some sense, if you draw the picture here: suppose this is the population risk L of theta. What you want to show is that, with high probability, your empirical risk is something like this-- kind of uniformly close to the population risk. That's the intuition we have. And actually, let's see. So yeah.
And actually, in the second half of the course, after week five or week six, we're also going to talk about how this picture is not entirely accurate. Indeed, in many cases the empirical risk is bounded within epsilon of the population risk, but it also doesn't really look that fluctuating. What really happens is something like this: maybe you have a population risk shaped like this, and the empirical risk is, first of all, close to the population risk, but also close in terms of the shape and the curvature. It wouldn't be that fluctuating; it would be something like maybe this. So not only are they close in value, but also in some other properties-- maybe the curvature, the shape-- they are somewhat close. And this is useful in certain cases, when you especially care about optimization. For example, if the empirical risk were that fluctuating, it would become harder to optimize. And since we sometimes do care about optimization, you want to show that the empirical risk also has nice properties for your computational purposes. OK, I guess that's a perfect stopping time. OK, thanks.
Stanford_CS229M_Machine_Learning_Theory_Fall_2021 | Stanford_CS229M_Lecture_16_Implicit_regularization_in_classification_problems.txt | OK. Hi, everyone. Yeah, let's get started. So today we're going to continue to talk about implicit regularization. In the last two lectures, we talked about the implicit regularization effect of initialization, and today we're going to have two parts. In the first part, we continue with implicit regularization-- this will be a more precise characterization in certain cases, as I will describe. In the second part, we're going to talk about classification problems. In all the past few examples, we were talking about regression problems, and it turns out that for classification problems, the behavior is a little bit different: instead of converging to some minimum norm solution, you converge to a max margin solution, which is, in some sense, similar to, but not exactly the same as, the regression case. OK? So with this lecture, we're going to conclude the discussion of the implicit regularization of initialization, and then next lecture we're going to talk about stochasticity. That will be the last lecture on implicit regularization. So today, part number one is a more precise characterization of the implicit regularization effect of initialization-- you can see exactly how the initialization influences the regularizer. And as preparation for today's lecture, we're going to talk about the so-called gradient flow. I was trying to avoid this notion in the past, but the spirit has shown up before as well. Basically, this is gradient descent with infinitesimal learning rate.
And the reason why this is useful is that with an infinitesimal learning rate, you can ignore the second order effects of the learning rate. It just makes the analysis much simpler: you don't have to say how small the learning rate is, and you don't have to deal with the second order effect, because the second order effect is literally zero. And it's actually also a pretty clean formulation of optimization, even though it's in continuous time. So here's how it works. Say you have a loss function L of w. If you do gradient descent, then you take w at time t plus 1-- and now I'm using parentheses for time, because I'm going to use that for continuous time-- so w of t plus 1 is equal to w of t minus eta times the gradient of the loss at w of t. This is what you do with gradient descent. And now suppose you rescale time by eta. What I mean is this: currently, every gradient descent update increases the step counter by 1-- before the update, the time is t, and after, the time is t plus 1. Now suppose instead that with every update, you only advance the time counter by eta instead of by 1. What you get is w of t plus eta is equal to w of t minus eta times the gradient of L at w of t. These two processes are effectively the same; it's just that the unit of time changed by a factor of eta, or 1 over eta. And once you rescale time this way, you can take eta to go to 0, and this becomes a differential equation-- a continuous-time process. You can write it as: w of t plus dt is equal to w of t minus dt times the gradient of L at w of t, where you replace eta by dt-- that's how you take eta to 0. And depending on what kind of community you come from, you can also write this with w dot of t, the derivative of w with respect to t.
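The time-rescaling argument above is easy to verify numerically. This sketch uses the made-up loss L(w) = w squared, whose gradient flow w'(t) = -2w(t) has the exact solution w(t) = w(0) exp(-2t); one gradient descent step with learning rate eta advances "time" by eta, and the discretization error vanishes as eta shrinks.

```python
import numpy as np

# Gradient descent as an Euler discretization of gradient flow for L(w) = w^2.
w0, T = 1.0, 1.0
exact = w0 * np.exp(-2 * T)            # the gradient flow solution at time T

errors = []
for eta in [0.1, 0.01, 0.001]:
    w = w0
    for _ in range(int(round(T / eta))):  # T/eta steps cover time T
        w -= eta * 2 * w                  # one GD step on L(w) = w^2
    errors.append(abs(w - exact))

assert errors[0] > errors[1] > errors[2]  # error shrinks monotonically as eta -> 0
assert errors[2] < 1e-3                   # at eta = 0.001, GD tracks the flow closely
```

This is exactly the sense in which the eta-squared terms disappear in the dt limit: they are the discretization error measured here.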
And this is effectively saying that the derivative of w with respect to t, which we denote by w dot of t, is equal to minus the gradient of L at w of t. Here, w dot of t is just the derivative of w with respect to the time t. And in some sense, this allows us to ignore the eta squared term, because eta squared here becomes dt squared, which is 0 compared to dt. So that's why this is useful for us. In some sense, this is mostly to simplify the equations-- all the technical meat is the same; it just makes the analysis cleaner. And in both of the next two examples, I'm going to use this gradient flow formulation of gradient descent. OK? So now let's talk about the model we're going to discuss. The model is a variant of the one from last lecture, and there are some reasons for changing the model a little bit, which I'm going to discuss, but they're not super important. So the model is a quadratically parameterized linear model, in some sense, with two parts. Let me write it down: f of w at x is w plus to the entrywise power of 2, minus w minus to the entrywise power of 2, transpose x-- where we use this notation: w to the entrywise power of 2 just means w entrywise-product w, and the little circle is the entrywise product. So it's w plus squared minus w minus squared, entrywise, transposed times x. And w plus and w minus are both vectors in R d, and you can write w as the concatenation of w plus and w minus as the parameter. So basically, this is very similar to what we did last time. Last time we had something like f of beta at x, which is beta entrywise-product beta, transpose x. So basically, now you have a negative term in it instead of just the positive one. And there are two benefits compared to last time. They are not super important, but let me mention them. One is that with beta entrywise-product beta, the model last time could only represent a nonnegative linear combination of the coordinates of x.
And now this f of w can represent any linear model. Right? Because before, the entrywise product of beta with beta is always nonnegative, so you could only have a nonnegative linear combination of the coordinates of x; now the coefficients can be negative. And another benefit is about initialization: if you initialize w plus at time 0 to be equal to w minus at time 0, then, as you can see, f of w 0 at x is equal to 0 for every x, just because the positive part cancels with the negative part. And I guess you have seen this kind of thing before-- this is mostly for convenience. It will make the analysis even more convenient, because the initialization has zero functionality; we kind of used this for the NTK, and it will be useful for our analysis here. And actually, what we are going to see today is that if you change the initialization, you get a different regularization, and you can precisely characterize how the regularization depends on the initialization. In one of the cases, it will be the NTK; in the other case, it will be similar to what we discussed last time. So, continuing with the setup: the loss function L of w will just be the square loss. And we consider the initialization we discussed, w plus at 0 equal to w minus at 0, so that the initial model has zero functionality. Also, for simplicity, we choose this to be alpha times the all-ones vector, and alpha is the thing we're going to vary. We're going to see how the implicit regularization effect depends on the scale of alpha: when alpha is small, it gives you something; when alpha is big, it gives you something else. And the all-ones vector is chosen, as before, somewhat for convenience.
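Both claimed benefits of the parameterization are one-liners to check. This sketch (with made-up numbers for x and theta) verifies that the symmetric initialization gives the zero function, and that, unlike the beta-entrywise-beta model, this one can represent an arbitrary linear coefficient vector, including negative entries.

```python
import numpy as np

# The quadratically parameterized model f_w(x) = (w_plus^{o2} - w_minus^{o2})^T x.
d = 4
x = np.array([1.0, -2.0, 0.5, 3.0])

def f(w_plus, w_minus, x):
    return (w_plus**2 - w_minus**2) @ x

# 1) symmetric init w_plus(0) = w_minus(0) = alpha * ones  =>  the zero function
alpha = 0.1
assert f(np.full(d, alpha), np.full(d, alpha), x) == 0.0

# 2) any linear coefficient vector theta is representable, even with negatives
theta = np.array([1.0, -0.5, 2.0, 0.0])
w_plus = np.sqrt(np.maximum(theta, 0.0))    # carries the positive entries
w_minus = np.sqrt(np.maximum(-theta, 0.0))  # carries the negative entries
assert np.isclose(f(w_plus, w_minus, x), theta @ x)
```

The split into positive and negative parts in step 2 is exactly why the extra minus term enlarges the model class relative to last lecture's beta-entrywise-beta model.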
You can still do it with other initializations; it's just that the form will be a little bit more complicated. And we also have the theta space. So theta corresponds to w in the sense that theta is defined to be w_+^{⊙2} - w_-^{⊙2}. This is the actual linear function you compute — the coefficient vector of the linear function corresponding to w, right? And we are interested in what kind of linear model we eventually learn. So let's define w^infinity to be the limit of w(t) as t goes to infinity — this is where the gradient flow converges to. And let theta_alpha^infinity be the model corresponding to w^infinity. This is the coefficient vector we care about: when you converge, what's the corresponding theta you get, and what's its property? Sometimes, for simplicity, we just write theta_alpha and omit the infinity, but this is where it converges to. OK? And for the sake of simplicity of the lecture, we assume everything has a limit and so on — all the regularity conditions are assumed to be met. Also, just to set up some notation, let X be the data matrix, in R^{n x d}, and let y_vec be the label vector. OK. So now, here is the theorem that characterizes the implicit regularization. Let me write it down and then interpret it. For any alpha, assume that you converge to a feasible solution — a solution that fits the data, in the sense that X theta_alpha = y_vec. If this is satisfied, it means we fit the data exactly. And I'm using a purple color for this because I don't feel that it necessarily has to be an assumption. You can prove this, actually.
The paper did assume this in the theorem, but I don't think you have to. Actually, I checked with [INAUDIBLE], the author, two days ago, and he also thinks you don't need it. But it's not formally stated in the theorem, so I'll assume it — though I do strongly believe you can prove that you converge to such a feasible solution without assuming it. Anyway, let's assume it just so that we are consistent with the paper. By the way, this is the paper by Woodworth et al. in 2020, with a title something like "Kernel and Rich Regimes in Overparametrized Models" — I'll probably add the link. It's a pretty recent paper; it showed up one year ago. OK. So suppose you have this. Then theta_alpha is not only a feasible, zero-loss solution — it's actually the minimum norm solution, or minimum complexity solution, according to the following complexity measure. That is, theta_alpha = argmin over theta such that X theta = y_vec of Q_alpha(theta). So among all the feasible solutions, you find the one of minimum complexity, where the complexity is defined by Q_alpha. And what is Q_alpha? It's a function of alpha, so the complexity measure changes as you change alpha: Q_alpha(theta) = alpha^2 times the sum over i of q(theta_i / alpha^2) — the alpha^2 in front doesn't really matter because it's a scalar. And the little q is a one-dimensional function mapping R to the non-negative reals, I think. And q(z) is equal to something I don't expect you to interpret directly — q(z) = 2 - sqrt(4 + z^2) + z * arcsinh(z/2) — but we're going to look at special cases which we can interpret. So this arcsinh.
I guess this is pronounced "sinch" in the US, or "shine" in the UK? I don't know why. Anyway, arcsinh is the inverse hyperbolic sine, applied to z over 2. OK. So the first point is that even though you didn't minimize this complexity measure algorithmically — you only ran gradient descent — somehow you find the minimum complexity solution, and the complexity is defined by something like this. That's the abstract theorem; now let's interpret it. The important thing is that, in particular, when alpha goes to infinity — so if you have a very large initialization — then q(theta_i / alpha^2) behaves like (theta_i / alpha^2)^2, that is, theta_i^2 / alpha^4. Which means that Q_alpha(theta) is something like 1 / alpha^2 times the 2-norm of theta squared. So basically, if alpha goes to infinity, then the so-called complexity measure Q_alpha is the L2 norm of theta. And if alpha goes to 0, what's the complexity measure? Then q(theta_i / alpha^2) is roughly |theta_i| / alpha^2 times log(1 / alpha^2). I don't expect you to verify this limit — you have to do some Taylor expansion to see it — but this is the thing. So this means Q_alpha(theta) is, in some sense, the 1-norm of theta, up to the factor log(1 / alpha^2). But the constant doesn't really matter, because it doesn't change the ordering of different theta. [INAUDIBLE] this the Q theta [INAUDIBLE]? Oh, sorry — yeah, that's what I mean. Cool. So in summary: when alpha goes to infinity, you get the minimum L2 norm solution in the theta space, which is a minimum L4 norm for w. Right? Because theta is the square of w.
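These two limits can be checked numerically. A small sketch, where the form of q is taken from the Woodworth et al. 2020 paper (it is not derived in the lecture) and the test values of theta and alpha are my own choices:

```python
import numpy as np

def q(z):
    # One-dimensional penalty, form taken from Woodworth et al. 2020:
    # q(z) = 2 - sqrt(4 + z^2) + z * arcsinh(z / 2).
    return 2 - np.sqrt(4 + z**2) + z * np.arcsinh(z / 2)

def Q(theta, alpha):
    # Q_alpha(theta) = alpha^2 * sum_i q(theta_i / alpha^2)
    return alpha**2 * np.sum(q(theta / alpha**2))

theta = np.array([1.0, -2.0, 0.5])

# Large alpha: Q_alpha(theta) ~ ||theta||_2^2 / (4 alpha^2), an L2 penalty.
big = 100.0
ratio_l2 = Q(theta, big) / (np.sum(theta**2) / (4 * big**2))

# Small alpha: Q_alpha(theta) ~ ||theta||_1 * log(1 / alpha^2), an L1 penalty.
small = 1e-4
ratio_l1 = Q(theta, small) / (np.sum(np.abs(theta)) * np.log(1 / small**2))

print(ratio_l2, ratio_l1)  # both ratios are close to 1
```

The second ratio is only approximately 1 because the L1 limit holds up to lower-order log terms, exactly as the Taylor-expansion caveat in the lecture suggests.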
And when alpha goes to 0, this is similar to what we discussed last lecture: you get the minimum L1 norm of theta, which is the minimum L2 norm of w. So that regime is what we saw last lecture, with a very similar model, right? But this theorem characterizes the whole range: between alpha = 0 and infinity you basically get some kind of interpolation between L1 and L2 regularization. That's why this is more precise than before — you know how those things interplay. Of course, for any particular alpha the q function is a little complicated, but it's kind of like a sum of some power of theta_i, where the power is between 1 and 2. It's not exactly a power, but you can think of it like that. So alpha [INAUDIBLE] scale [INAUDIBLE]? Say again? Is alpha the scale of the initialization — yes, alpha is the scale. This is the only thing in the algorithm that depends on alpha: we said the initialization is alpha times the all-ones vector. And the all-ones vector can actually be changed as well. If you change it to an arbitrary vector, then for the alpha-goes-to-0 regime you don't change anything. For the other limit, things change a little bit: the L2 norm becomes weighted by the particular initialization. If it's the all-ones vector, the weighting is the same for all coordinates; if not, you get a different weighting. Those details can be found in the paper; for simplicity I didn't show the exact weighting. Right. And here is some intuition about this interpolation — in some sense, you can view this as a unification of what we have discussed in the past few lectures. When alpha is small, this is small initialization.
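The two ends of the interpolation can be seen in a simulation. Below is a minimal numpy sketch of the discretized gradient flow on this model; all sizes, seeds, step counts, and learning rates are my own illustrative choices, not from the lecture:

```python
import numpy as np

def gradient_flow(X, y, alpha, lr, steps):
    """Euler-discretized gradient flow on L(w) = 1/2 ||X(w_+^2 - w_-^2) - y||^2,
    started from the symmetric initialization w_+(0) = w_-(0) = alpha * ones."""
    d = X.shape[1]
    wp = alpha * np.ones(d)
    wm = alpha * np.ones(d)
    for _ in range(steps):
        r = X @ (wp**2 - wm**2) - y        # residual r_t
        g = X.T @ r                        # shared factor X^T r_t
        wp, wm = wp - lr * 2 * wp * g, wm + lr * 2 * wm * g
    return wp**2 - wm**2                   # effective coefficients theta

rng = np.random.default_rng(0)
n, d = 15, 25
theta_star = np.zeros(d)
theta_star[:2] = 1.0                       # sparse ground truth
X = rng.standard_normal((n, d))
y = X @ theta_star

theta_small = gradient_flow(X, y, alpha=0.01, lr=1e-3, steps=100_000)
theta_large = gradient_flow(X, y, alpha=10.0, lr=2e-5, steps=100_000)
theta_l2 = X.T @ np.linalg.solve(X @ X.T, y)   # min-L2-norm interpolant

# Small alpha behaves like min-L1 (recovers the sparse theta_star);
# large alpha behaves like the min-L2-norm solution (kernel/NTK regime).
print(np.linalg.norm(theta_small - theta_star))
print(np.linalg.norm(theta_large - theta_l2))
```

Both runs fit the same data, yet land on very different interpolants — the regularization effect comes only from the initialization scale, exactly as the theorem says.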
So this is basically similar to the previous case — the same intuition as the previous lecture. But note one small thing: it's not exactly the same statement; it only holds in the limit. When alpha is not 0, the regularization effect is not exactly "closest solution to the initialization" — I think the paper shows there are some tiny differences. Only when alpha goes to 0 can you basically say it's the closest solution to the initialization. But generally, this is the thing we discussed last time. And when alpha goes to infinity, this is indeed the NTK regime. Why is this the NTK regime? I'm going to show you — this is similar to what we discussed before, but let me do it again. Recall that in the NTK analysis we had two parameters, sigma and beta: beta was the smoothness, the Lipschitzness of the gradient, and sigma measured the conditioning of the gradient feature matrix. And recall we had the discussion that the ratio beta / sigma^2 is what matters: if it goes to 0, then you are in the NTK regime, and you can approximate by the quadratic. So now let's compute sigma and beta in this case. The gradient with respect to the parameters at the initialization, on input x: take w_0 with w_+ and w_- both equal to alpha times the all-ones vector. There are two sets of parameters. The gradient with respect to w_+ is 2 w_+ ⊙ x, and the gradient with respect to w_- is -2 w_- ⊙ x — sorry, these are elementwise products. You can get this easily just by the chain rule for every dimension, and there's a factor 2, which I often drop.
And since w_+ and w_- are both alpha times the all-ones vector, this is 2 alpha times (x, -x). Right? And now you can see that sigma and beta both depend linearly on alpha. What is sigma? Sigma measures the conditioning of the gradient matrix — the feature matrix consisting of the gradient at every data point — and it scales linearly in alpha, because alpha is multiplied in front of the gradient. And beta, the Lipschitzness, also scales linearly in alpha. So both scale linearly, and that's why beta / sigma^2 converges to 0 as alpha goes to infinity: in the denominator you have a quadratic dependency on alpha, and in the numerator only a linear one. So the whole thing goes to 0 as alpha goes to infinity. Also, when alpha goes to infinity, this is the NTK regime for the trivial feature, because the feature map at initialization is literally just (x, -x) up to scaling. It's the trivial feature: the only thing you did is append a flipped copy of x, which doesn't really make any difference, essentially. So if you believe the NTK perspective, you should get the minimum norm solution — the minimum L2 norm solution using the feature (x, -x). And (x, -x), as a feature, is not very different from x itself. So basically you get essentially the minimum L2 norm solution for the linear model — the same conclusion as we discussed above. Any questions so far? So the question is why NTK gives you the minimum norm solution. I think this is just because we are doing the kernel method.
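The closed-form gradient at the initialization can be checked against finite differences. A small sketch; the dimension, seed, and alpha are arbitrary demo values:

```python
import numpy as np

def f(w, x):
    # f_w(x) = (w_+^2 - w_-^2)^T x with w = (w_+, w_-) concatenated.
    d = x.size
    return (w[:d]**2 - w[d:]**2) @ x

rng = np.random.default_rng(1)
d, alpha = 4, 3.0
x = rng.standard_normal(d)
w0 = alpha * np.ones(2 * d)       # symmetric initialization w_+(0) = w_-(0) = alpha * ones

# Finite-difference gradient of the model output at the initialization.
eps = 1e-6
E = np.eye(2 * d)
grad = np.array([(f(w0 + eps * E[i], x) - f(w0 - eps * E[i], x)) / (2 * eps)
                 for i in range(2 * d)])

# Closed form: grad_{w_+} f = 2 alpha x and grad_{w_-} f = -2 alpha x,
# i.e. the feature map at scale alpha is 2 alpha * (x, -x).
expected = 2 * alpha * np.concatenate([x, -x])
print(np.max(np.abs(grad - expected)))  # agrees up to floating-point roundoff
```

This makes the linear dependence on alpha explicit: scaling the initialization scales every NTK feature by the same factor.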
So NTK tells you that you are doing the kernel method with certain features, and the feature here just turns out to be this trivial feature, and the kernel method with that feature gives you the minimum norm solution. That's what the kernel method does when you don't have enough data: when your feature dimension is bigger than the number of examples, you learn the minimum norm solution for the features, because otherwise the problem is underdetermined and you have to pin something down. In the kernel method everything is L2, so you are implicitly minimizing the L2 norm. [INAUDIBLE] Yeah, that doesn't depend on the initialization, because it's a convex problem. And you use a particular algorithm when you do the kernel method, and that algorithm gives you the minimum norm solution. OK, cool. Any questions? One question? [INAUDIBLE] going to infinity [INAUDIBLE] respective [INAUDIBLE], like, it could lead you off of the [INAUDIBLE]? But I guess that goes a little bit back [INAUDIBLE]. We would say that there is [INAUDIBLE]. Yep. So, repeating the question and also answering it: when alpha goes to infinity, yes, your problem will be very ill posed. In some sense the optimization landscape will be very bad, just because your function will not be very smooth. And this issue is hidden here because you are using gradient flow — an infinitesimally small learning rate — so it's swept under the rug. And then, practically, you also don't necessarily want to use a large initialization: one reason is the optimization, and the other reason is that maybe the L2 norm solution is not good, right? You want the L1 regularization, at least for this particular setting.
So that's another reason why you don't want to use a very large learning rate — sorry, a very large initialization. And another thing, about the empirical setup: in practice people sometimes do use large initialization, but they don't use an infinitesimally small learning rate, so you still don't get into the NTK regime. And that's a good thing, because you don't want to be in the NTK regime. That's why, at the beginning, some people were confused: the very first NTK paper claimed that the initialization scheme they study is what people do in practice, and that's kind of true — it's very close to the Kaiming He initialization or the Xavier initialization in terms of scale. But the theoretical setup requires a very, very small learning rate, and empirically you don't use such small learning rates; also, the theoretical setup doesn't have stochasticity. All of this together makes the theoretical setup different from the empirical setting. And that's a good thing, because the theory says that in that setup you don't really do anything super different from kernels. Yeah. OK, so now let's discuss the proof of this theorem. The proof is kind of interesting in the sense that it is similar to the linear regression proof, but not similar to what we discussed last lecture. You would probably guess it's similar to last lecture, because last lecture had almost the same model, and it handled only a subcase of this — alpha going to 0. But it turns out the proof is very similar to the linear regression one, and it has two steps. The first step is to find an invariance maintained by the algorithm, by the optimizer.
And recall that for linear regression this invariance was that theta stays in the span of the x_i's. This was probably two or three lectures ago, when we analyzed the implicit regularization effect of initialization for linear regression: because you initialize at 0 and use gradient descent, you always stay in the span of the data. Here we're going to find a different invariance, which is more complicated and even harder to express, but we will find it. Then, step two: characterize the solution using the invariance. "Characterize" is a vague term, but the point is to use the invariance as additional information to pin down which solution you converge to. In some sense, the difficulty is that without anything additional, you just know that you converge to some zero-loss solution; you don't know which one. The invariance tells you which one, and the invariance depends on alpha. And note there's nothing about population versus empirical here — everything is empirical. I didn't even define where the data comes from. I'm only telling you that this is the minimum norm solution among those with zero empirical error; I don't have to care about the population at all. So yeah. How does this technique compare with the technique from last time, where you used the fact that the empirical loss concentrates around the population loss in certain regions and somehow controlled the dynamics? It's kind of hard to compare — they're two different approaches. The good thing about this approach is that it doesn't require a population, which sounds nice. The bad thing is that it seems very hard to find invariances for harder, more complex models. You will see the invariance is a little bit magical somehow.
But that's that — for more complex models, the previous approach, the one we discussed last time, wouldn't work either. So it's hard to say. Anyway, let's proceed to see how the proof works. We need a little notation to simplify the exposition. Let x_tilde = [X, -X] be the extended data matrix: you concatenate X and -X so that you get an n x 2d matrix. This is just so we can write everything in matrix notation without carrying the minus sign around. We take w_t to be the concatenation of w_+(t) and w_-(t); this has dimension 2d. And w_t^{⊙2} is the entrywise square of w_t. With this notation, x_tilde w_t^{⊙2} = [X, -X] (w_+(t)^{⊙2}; w_-(t)^{⊙2}), and you can verify this is really the same as the model output on the data points. So with this matrix notation you can compute the derivative w-dot_t. Because we are doing gradient flow, w-dot_t = -grad L(w_t). And what is grad L(w_t)? The loss function can now be written as L(w_t) = 1/2 ||x_tilde w_t^{⊙2} - y_vec||^2, because I vectorized everything. Taking the gradient by the chain rule, you get grad L(w_t) = 2 (x_tilde^T r_t) ⊙ w_t, where r_t = x_tilde w_t^{⊙2} - y_vec is the residual vector. If you are familiar with linear regression, you will recognize part of this: if it were linear regression, x_tilde^T r_t is what you would get.
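The gradient formula can be sanity-checked against finite differences. A minimal sketch; the sizes and seed are arbitrary demo values:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 3, 5
X = rng.standard_normal((n, d))
Xt = np.hstack([X, -X])            # extended data matrix x_tilde = [X, -X]
y = rng.standard_normal(n)
w = rng.standard_normal(2 * d)

def L(w):
    # L(w) = 1/2 || x_tilde w^{entrywise 2} - y ||^2
    return 0.5 * np.sum((Xt @ w**2 - y)**2)

r = Xt @ w**2 - y                  # residual vector r
closed_form = 2 * (Xt.T @ r) * w   # chain rule: the entrywise square contributes 2w

eps = 1e-6
E = np.eye(2 * d)
fd = np.array([(L(w + eps * E[i]) - L(w - eps * E[i])) / (2 * eps)
               for i in range(2 * d)])
print(np.max(np.abs(fd - closed_form)))  # tiny: formula matches finite differences
```

The extra factor ⊙ w relative to plain linear regression is exactly what makes the dynamics multiplicative, which is what the invariance below exploits.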
If it were linear regression, then that term would be the whole gradient. But it's not linear regression — you have the quadratically parameterized model — so the chain rule also gives you the derivative of the entrywise square of w_t, which is where the extra factor ⊙ w_t comes from. It's because the parameterization is quadratic. Anyway, that's one way to think about why this is true; the formal verification is just doing the chain rule coordinate by coordinate. [INAUDIBLE] Oh, sorry — right, there should be a 2 here. Let me see: that means my loss function should have a 1/2 in front. Where's my loss function? I also defined it somewhere earlier — there, with the 1/2. That sounds good. My brain just automatically removes all the constants, so it's very hard for me to track them. OK, cool. So now, we said we want an invariance — in some sense we want to solve this differential equation. But you cannot really solve it exactly. I'm not an expert on differential equations, but I think a closed-form solution is beyond reach. Interestingly, though, you can get something useful without solving it exactly. So we claim — in the paper they say it's easy to verify — that w_t satisfies w_t = w_0 ⊙ exp(-2 x_tilde^T ∫_0^t r_s ds). Why is this the case? First of all, this is not a solution — depending on what you mean by "solution". It's not a closed-form solution by my definition, because r_s is still a function of w. But it's going to be very useful for us. And why is it true? It's actually relatively simple; here is the reason.
Suppose you have a differential equation of the form u-dot(t) = v(t) u(t). I'm abstracting a little so I can give a clean analysis — and this is a good abstraction of what we had, because on the left-hand side you have the derivative of w, and on the right-hand side you have something times w itself. So w plays the role of u, and the other factor plays the role of v. Given such an equation, you can always write u-dot(t) / u(t) = v(t). Right? That's always true. And the left-hand side — this is a magical thing in many cases — is the derivative of log u(t), by the chain rule. You've probably seen this in other contexts, like policy gradients, depending on what you know. Then you integrate both sides: log u(t) - log u(0) = ∫_0^t v(s) ds. Now remove the log and you get exponentials: u(t) / u(0) = exp(∫_0^t v(s) ds). And if you map u to a coordinate of w and v to the corresponding coordinate of -2 x_tilde^T r_t, you can apply this coordinatewise and get the desired result. And by the way, one remark: the exponential is applied entrywise. The argument -2 x_tilde^T ∫_0^t r_s ds is a vector; you take its entrywise exponential and then the entrywise product with w_0. OK. Any questions so far? So now, let's see why this is useful. It's a little bit magical, in my opinion — conceptually I think this is fun, but at the proof level there's a little bit of what you could call coincidence or magic.
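The integrating-factor fact can be checked on a toy ODE. A sketch where v(t) = cos(t) is my own choice, so the closed form is u(t) = u(0) exp(sin t):

```python
import numpy as np

# Check the fact used above: if u'(t) = v(t) u(t), then
# u(t) = u(0) * exp( integral_0^t v(s) ds ).
# With v(t) = cos(t) and u(0) = 1, the closed form is u(t) = exp(sin t).
dt, T = 1e-4, 2.0
u, t = 1.0, 0.0
while t < T:
    u += dt * np.cos(t) * u      # forward-Euler step of u' = v(t) u
    t += dt

closed_form = np.exp(np.sin(T))
print(u, closed_form)            # agree up to O(dt) discretization error
```

Applying this coordinatewise with v(t) = -2 (x_tilde^T r_t)_i is exactly the claimed invariance for w_t.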
It turns out this is all you need to verify that the limit is the minimizer of the program. First, let's turn this characterization of w into one for theta, and simplify it a little. Recall w_+(0) = alpha times the all-ones vector and w_-(0) = alpha times the all-ones vector, so w_0 is alpha times the all-ones vector in 2d dimensions, since w is the concatenation of w_+ and w_-. So w_0 is basically unimportant: it just contributes the factor alpha. Then theta(t) = w_+(t)^{⊙2} - w_-(t)^{⊙2}, and let's use the formula from before — call it equation (1). Write v for the integral ∫_0^t r_s ds. Since x_tilde^T = [X^T; -X^T], the vector exp(-2 x_tilde^T v) splits into exp(-2 X^T v) for the w_+ block and exp(2 X^T v) for the w_- block. Squaring turns the 2 into a 4: w_+(t)^{⊙2} = alpha^2 exp(-4 X^T v) and w_-(t)^{⊙2} = alpha^2 exp(4 X^T v). So theta(t) = alpha^2 (exp(-4 X^T v) - exp(4 X^T v)), entrywise.
What I'm doing here is just trying to convince you this derivation is true — it should be a trivial derivation; there is nothing difficult. OK, so this is the characterization of theta. And since you have exp of something minus exp of minus the same thing, you can write it more succinctly using sinh: theta(t) = 2 alpha^2 sinh(-4 X^T ∫_0^t r_s ds), by the definition sinh(z) = (e^z - e^{-z}) / 2. So we have a characterization of theta. Right? Then theta_alpha = theta at infinity = 2 alpha^2 sinh(-4 X^T ∫_0^infinity r_s ds). This is something we know the final point satisfies — let's call it equation (2). We also know X theta_alpha = y_vec, because we assume — or, as I discussed, I think we can prove — that you converge to a feasible solution; call that equation (3). And I'm claiming that (2) and (3) turn out to be the optimality conditions of the program from far above — the argmin over theta with X theta = y; let's call it program (I). So you want to say theta_alpha is the minimizer of program (I), and it turns out theta_alpha satisfies these two equations, which are exactly the optimality conditions of that optimization program. And the program has only one solution because it's convex — that's why theta_alpha is the solution. That's the plan for what's next. Right. Sounds good. By optimality conditions, I really mean the KKT conditions. I'm not sure all of you are familiar with the KKT conditions, so here is a small bit of background.
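The sinh characterization can be verified numerically along a discretized trajectory — note that the invariance holds at every time t, not just at convergence. A sketch with arbitrary demo sizes, seed, and step size of my choosing:

```python
import numpy as np

# Check theta(t) = 2 alpha^2 sinh(-4 X^T int_0^t r_s ds) along a
# finely discretized gradient-flow trajectory.
rng = np.random.default_rng(0)
n, d, alpha, dt = 4, 8, 0.5, 1e-4
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

wp = alpha * np.ones(d)
wm = alpha * np.ones(d)
v = np.zeros(n)                    # running integral of the residual r_s
for _ in range(20_000):
    r = X @ (wp**2 - wm**2) - y
    v += dt * r
    g = X.T @ r
    wp, wm = wp - dt * 2 * wp * g, wm + dt * 2 * wm * g

theta = wp**2 - wm**2
theta_pred = 2 * alpha**2 * np.sinh(-4 * X.T @ v)
print(np.max(np.abs(theta - theta_pred)))  # small Euler discretization error
```

The two expressions track each other throughout training, which is exactly what lets us read off the KKT certificate at the limit.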
These are optimality conditions for constrained optimization problems. To be honest, I never really remember the exact KKT conditions in many cases, so what I'll show you is one way to think about it, which is probably not exactly what you'd read in a book, but very similar. Suppose you have an optimization program like this — minimize Q(theta) subject to X theta = y — and Q(theta) is convex. First of all, the KKT conditions say: grad Q(theta) = X^T v for some v, which lives in dimension n, I think, and X theta = y. That's the KKT condition for this kind of program. One option is to look up a book and invoke a theorem which says this is the optimality condition. The way I think about it — the way I re-derive it every time I need it — is the following, if you're interested. The insight is that optimality at least means there is no first-order local improvement: if you perturb your solution by an infinitesimally small amount, you shouldn't get a first-order improvement. But you also have to satisfy the constraint, so more precisely: no first-order local improvement among perturbations that satisfy the constraint, at least up to first order. So what does this mean in this case? Consider a perturbation delta-theta. How do you maintain the constraint?
To satisfy the constraint, the perturbation needs to be orthogonal to the row span of X: if it has a component in the row span, perturbing changes X theta and you no longer satisfy the constraint. So we require X delta-theta = 0 — that's how you make the constraint work. Now look at theta + delta-theta, the locally perturbed point — it still satisfies the constraint — and let's see the value of Q. Up to first order, Q(theta + delta-theta) = Q(theta) + the inner product of grad Q(theta) with delta-theta. [AUDIO OUT] We cannot hear you. Maybe let's try this. Can you hear me now? Thanks for letting me know. Is the audio OK? Yeah, it's OK. OK. I'm using my laptop's microphone, so let me turn it so it works better. Thanks for letting me know. Maybe I'll rewind a little — I don't know how long you lost me, so I'll briefly go through the steps again. I was saying that if the perturbation is orthogonal to the row span of X, you always satisfy the constraint. And we want to figure out under what condition no such perturbation can improve the function — can make the function smaller — because if some perturbation decreases Q, the point is not optimal. So you look at the Taylor expansion of Q: the first-order change is this inner-product term, and you need this term to never be negative, because if it can be negative, it violates the optimality assumption.
So a necessary condition is that this first-order term is non-negative for every allowed delta-theta — but the sign of the term is very easy to flip, because you can replace delta-theta by its negation. So that basically means that for every delta-theta in the orthogonal complement of the row span of X, this term has to be literally 0: if it's not 0, you can flip delta-theta to make it negative. And if grad Q(theta) has zero inner product with every vector in that subspace, it means grad Q(theta) lies in the complementary subspace — the row span of X. So grad Q(theta) can be written as X^T v: X^T v is exactly the representation of a vector in the row span of X. That's how we derive the KKT condition: the gradient of Q at theta has to be in the row span of X, and theta also has to be a feasible solution. OK. Cool. So that was a digression about KKT conditions. If you're not familiar with them, the only important thing is that this is the characterization of the optimal solution of program (I). And now it's just pattern matching, right? The feasibility condition obviously corresponds to equation (3), and the gradient condition corresponds to equation (2) — OK, that's not trivial yet, so let's see it. KKT tells you that grad Q(theta) needs to be of the form X^T v, and the invariance — the differential equation — tells us the following; let me just rewrite it.
So θ_α is equal to 2α² times sinh of something; let's simplify and write that something as −4Xᵀv′, because what exactly v′ is doesn't matter. And then let's also work on the Q side of things. Here we are verifying a given Q; to actually derive it you would reverse engineer in the other direction. But if you are just given this Q and want to verify it, you can compute that the gradient of Q is, coordinate-wise, arcsinh(θ/(2α²)). That makes sense because Q is a sum of a scalar function of each θ_i, so the gradient is that scalar function's derivative applied to each coordinate. And then you can see that if you plug θ_α in here, arcsinh(θ_α/(2α²)) is just −4Xᵀv′. So the gradient ∇Q(θ_α) equals −4Xᵀv′, and this satisfies the KKT condition: the exact form doesn't matter, because v can be any vector, so θ_α satisfies the KKT condition. So it's the global minimum. I guess there is one last step: satisfying the KKT condition implies a global minimum only because this program is convex. The constraint is linear, so it's convex, and the objective, you can verify, is also convex -- it's something between the L1 norm and the L2 norm, and both of those are convex. Any questions? If there are no questions, I'm going to move on to the next topic, the classification problem. Yeah? I see many of you staring at this proof. [A student asks how one would come up with a proof like this.] So how do you prove something is the minimizer of an optimization program? You have to verify it satisfies the KKT condition, I guess.
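As a quick sanity check of this pattern matching (a sketch, where c stands in for a single coordinate of −4Xᵀv′ from the board, treated as just a number), plugging θ = 2α² sinh(c) into the coordinate-wise gradient arcsinh(θ/(2α²)) recovers c exactly:

```python
import math

# Sanity check of the pattern-matching step: if theta_alpha = 2 alpha^2 sinh(c)
# coordinate-wise, then q'(theta_alpha) = arcsinh(theta_alpha / (2 alpha^2)) = c,
# i.e. the gradient of Q has exactly the X^T-times-something form KKT demands.
alpha = 0.3
cs = [-2.0, -0.5, 0.0, 1.0, 3.0]
recovered = []
for c in cs:
    theta = 2 * alpha ** 2 * math.sinh(c)                  # candidate coordinate
    recovered.append(math.asinh(theta / (2 * alpha ** 2)))  # q'(theta)
print(recovered)  # matches cs, since asinh is the inverse of sinh
```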
That's probably more or less the only way to do it if you want to show something is the optimizer of some optimization program. But it is kind of magical why it just happens to satisfy the KKT condition. Of course, there is something we can choose: we choose Q to make it satisfy the KKT condition. But the magical thing is that everything else matches up -- the form, the Xᵀ-times-something structure, all of those things match up -- and also, in some sense, you can work with each coordinate independently in this special case. That's something that's maybe a little bit special to this particular model. All right. OK, so now let's move on to the classification problem, and we are looking at separable data, as we always do for classification problems. And here we are going to discuss only one result, which says that if you do gradient descent, it converges to a max-margin solution. And this actually doesn't require any particular initialization -- it works for any initialization. So the only thing you need is gradient descent and a suitable loss function. No regularization: you just run gradient descent on the loss function for a long time, and you converge to the max-margin solution. So let me again start with the setup. We have a dataset (x_i, y_i) for i from 1 to n, with x_i ∈ R^d and y_i a binary label, plus 1 or minus 1. [A student asks about the previous result: instead of the power w squared, what if you use w to the power k -- does the proof break down when you compute the gradient?] OK. Yeah. So the question is about the previous topic: what if you don't use w squared, you use w to the k. And this is a very good question, actually. This is exactly what the paper studied in its more technical part.
And the short answer is that everything can still go through, but the eventual Q would be different. So the form of your Q would be something other than between L1 and L2 -- it depends on the power. If the power is p, I don't exactly remember, but I think you get something like a 1-over-p norm when α is close to zero. When α goes to infinity, I think everything is still the same: the NTK regime is not sensitive to this. And technically, why does everything go through? Roughly speaking, because you are still only playing with a single one-dimensional function in some sense. It won't be sinh anymore -- there will be some other function, with some other constants -- but the Xᵀ-times-something structure is still there; that's not changed. So eventually you just have to plug in a different Q to make everything work, and the Q still has this form: it only depends on the coordinates -- you do something to each coordinate and take the sum. So that's why it's still doable. OK. Cool. So going back to the classification problem. This is our setup, and here we are only going to do the linear model, even though some of this theory still works for nonlinear models with roughly similar techniques and similar conclusions. And we're going to have a loss function: L̂(w) = (1/n) Σᵢ ℓ(yᵢ h_w(xᵢ)), where ℓ is the cross-entropy loss -- the logistic loss -- ℓ(t) = log(1 + exp(−t)). OK. Cool. And the first thing, to get some intuition: first of all, with separable data we have multiple global minima. This is a premise for any implicit regularization effect: if you had only one global minimum and you converge to it, there would be no implicit regularization to speak of. But why are there multiple global minima?
This is just because you can always have an infinite number of separators, pretty much -- unless in some very extreme case where you happen to have exactly one. It's probably easier to draw something: suppose you have some data points like this; then you have many different possible separators. As long as you have one, you can perturb it a little bit and it still separates. So there are infinitely many w such that yᵢ wᵀxᵢ > 0 for every i. So you have many separators, and on top of that: take any unit vector w̄ that separates. The separating property doesn't depend on the norm, so you can always scale it. For any such w̄, the loss L̂(α w̄) goes to 0 as α goes to infinity. So any extreme scaling of a unit separator gives you a loss close to 0. Basically you have many directions such that going to infinity along them drives the loss to zero. So, being a little sloppy, all of these "infinity times w̄" points are global minima of this loss function, just because the loss function goes to zero at infinity. Maybe I should also draw the loss: ℓ(t) looks like this, and as t goes to infinity, it approaches 0. And what's inside? t is yᵢ wᵀxᵢ, and this goes to infinity as you scale up the norm of w. So there are many directions you can find -- many global minima. The question is which direction you will find. If you just invoke a standard theorem about optimization, you know that you find a solution with loss close to zero, but you don't know which direction it is. You still have a lot of flexibility there.
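Here is a tiny made-up illustration of that point: take a unit direction that separates four toy points and watch the averaged logistic loss vanish as the scale α grows.

```python
import math

# Separable toy data: the unit vector w_bar = (1, 0) separates it, since
# y_i * (w_bar . x_i) > 0 for every point. Scaling alpha * w_bar drives the
# logistic loss toward 0, so every separating "direction at infinity" is
# (loosely speaking) a global minimum.
data = [((2.0, 1.0), 1), ((1.5, -1.0), 1), ((-1.0, 0.5), -1), ((-2.0, -2.0), -1)]
w_bar = (1.0, 0.0)

def logistic_loss(w):
    return sum(math.log(1 + math.exp(-y * (w[0] * x[0] + w[1] * x[1])))
               for x, y in data) / len(data)

losses = [logistic_loss((a * w_bar[0], a * w_bar[1])) for a in [1, 5, 25, 125]]
print(losses)  # strictly decreasing, heading to 0
```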
If you go to infinity along many different directions, you can drive the loss to 0. So that's the question we're actually going to address, and we're going to say that gradient descent actually converges to the max-margin solution -- the max-margin direction. So let's first define the margin and the normalized margin. We have defined the margin before: the margin of w is the minimum over i of yᵢ wᵀxᵢ. And we always assume the data are linearly separable, so this definition is only interesting in the separable case. The normalized margin is this margin divided by the norm of w, because otherwise you could make the margin arbitrarily large or small just by scaling. OK? So the max-margin solution is: over all w, which one gives you the maximum normalized margin? Let w* be the maximizer, taken with unit norm -- this objective doesn't depend on the scale, because the scale is already normalized out, so w* with unit norm is the direction of the max-margin solution. So basically we're going to prove that if you do gradient descent, the iterates go to infinity, but only along the direction of w*. That's the theorem. We'll work with gradient flow, just because it's convenient, as we discussed. The theorem: gradient flow converges to the direction of the max-margin solution, in the sense that -- we don't exactly show convergence in direction, we only show convergence in the value of the margin; if you really want exact convergence in direction, of course, it's a little more work. So what we say is that the normalized margin of your iterate converges to the maximum possible margin, γ̄, as t goes to infinity, where w_t is the iterate at time t.
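A small simulation of what the theorem predicts (my own toy example, not the lecture's): on a two-point dataset whose maximum normalized margin is γ̄ = 1, attained by the direction (1, 0), gradient descent on the logistic loss is started off the max-margin direction, yet its normalized margin creeps toward γ̄.

```python
import math

# Gradient descent on the logistic loss for a separable 2-point dataset.
# Here y_i * x_i equals (1, 1) and (1, -1), so the max normalized margin is
# gamma_bar = 1, attained by the unit direction (1, 0).
data = [((1.0, 1.0), 1), ((-1.0, 1.0), -1)]

def grad(w):
    g = [0.0, 0.0]
    for x, y in data:
        s = 1 / (1 + math.exp(y * (w[0] * x[0] + w[1] * x[1])))  # sigma(-y w.x)
        g[0] -= y * x[0] * s / len(data)
        g[1] -= y * x[1] * s / len(data)
    return g

w = [0.5, 1.0]                      # deliberately off the max-margin direction
for _ in range(20000):
    g = grad(w)
    w = [w[0] - 0.5 * g[0], w[1] - 0.5 * g[1]]

norm = math.hypot(w[0], w[1])
margin = min(y * (w[0] * x[0] + w[1] * x[1]) for x, y in data) / norm
print(margin)   # close to gamma_bar = 1
```

The norm of w keeps growing (the loss never reaches exactly zero), but the direction w/‖w‖ locks onto the max-margin separator.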
So in the next five minutes, I'm going to discuss a little bit of the intuition for why this works, and then in the next lecture I'll prove the theorem more rigorously. So why is this working? I have a few steps here. Step one: the loss L̂(w_t) goes to 0 by standard optimization arguments, which are not covered in this course -- but you can believe that if your optimization is working, your loss should go to 0. That's observation one. Observation two: the loss function, which we defined to be the logistic loss ℓ(t) = log(1 + exp(−t)), is actually close to the exponential loss exp(−t) for large t. This is just a Taylor expansion: log(1 + x) is approximately x for small x, so with x = e^{−t} you can get rid of the log and the 1. And this is an interesting point: we call it the logistic loss, but in this regime it's really closer to the exponential loss. So in the proof, I'm going to just assume the loss is exponential; the small differences can be dealt with relatively easily. The third observation is that, because of observation one, the norm of w_t has to go to infinity. The reason is that if the norm doesn't go to infinity, you can never make the loss close to zero. This is because if w_t is bounded -- say its norm is always bounded by some B -- then you can always bound yᵢ wᵀxᵢ by B times the norm of xᵢ. So this is bounded.
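Observation two is easy to see in numbers; here is a small check that the gap between the two losses collapses once t is moderately large:

```python
import math

# Observation two in numbers: the logistic loss log(1 + e^{-t}) and the
# exponential loss e^{-t} agree for moderately large t, since
# log(1 + x) ~ x when x = e^{-t} is small.
gaps = []
for t in [1.0, 3.0, 6.0, 10.0]:
    logistic = math.log(1 + math.exp(-t))
    exponential = math.exp(-t)
    gaps.append(exponential - logistic)
print(gaps)  # positive and shrinking roughly like e^{-2t} / 2
```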
And then your loss L̂(w_t) is bounded below by something like exp(−B · max_i ‖xᵢ‖), which is strictly above zero. And this contradicts what? Right: if your norm is always bounded, then your loss is bounded below by some number -- a number possibly very close to zero, but still a fixed positive number -- which contradicts the convergence of the loss to zero. So now comes the most important part. With all of this preparation, we know the norm goes to infinity, so let's look only at the late regime where the norm of w_t is very big. Let's call the norm q, and suppose q is very big. Then let's try to simplify the loss function and see what it looks like. Let me drop the t just for simplicity: suppose we look at some w whose norm is very big. So L̂(w) is the sum of the logistic loss, or the exponential loss -- we're not distinguishing them for now -- so it's roughly Σᵢ exp(−yᵢ wᵀxᵢ). And because the loss is very close to zero, it's actually more informative to look at it in log space: log L̂(w) is roughly log Σᵢ exp(−yᵢ wᵀxᵢ). So this is a log-sum-exp. I'm not sure whether this rings a bell for some of you -- this is basically a softmax. And I'm going to claim that this log-sum-exp is close to the max over i of −yᵢ wᵀxᵢ. Why is this the case? Let's do some abstract derivation again -- I guess I'm running late. Actually, sorry, I should have one more step first. This is a log-sum-exp, but I also want to use the fact that w has a large norm. So let's pull the norm q = ‖w‖ out front: log Σᵢ exp(−q yᵢ w̄ᵀxᵢ), where w̄ = w/‖w‖ is the normalization of w.
So now I'm going to claim that this is close to maxᵢ of −q yᵢ w̄ᵀxᵢ. Why is this the case? For those who are familiar with it, log-sum-exp is kind of like a soft max. Let me abstract it a little bit: look at log Σᵢ exp(q uᵢ), where q is very large and the uᵢ are fixed. I claim this is roughly q times maxᵢ uᵢ over i in [n], plus something small -- something that doesn't grow with q as q goes to infinity. So when q is very big, this is really computing the max. This is kind of like the temperature in a softmax: if you scale the logits up, softmax becomes hard max. And if you want to prove this, it's just two bounds. Upper bound: replace each term in the sum by the biggest one, so Σᵢ exp(q uᵢ) ≤ n · exp(q maxᵢ uᵢ), and taking logs gives log n plus q times maxᵢ uᵢ -- and the log n is small compared to q, because q goes to infinity while n is fixed. On the other hand, for the lower bound, just keep the single term achieving the max and drop all the others: there's no sum anymore, the log cancels the exponential, and you get q maxᵢ uᵢ. So this log-sum-exp is within an additive log n of the max, and that log n factor is negligible as q goes to infinity, which justifies this step. So once you have this step, what's going on here? You are minimizing the loss, so you are also minimizing the log of the loss. And minimizing the log loss means you are minimizing this quantity, maxᵢ (−q yᵢ w̄ᵀxᵢ).
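The two bounds above can be checked directly; a small sketch with a made-up vector u:

```python
import math

# log-sum-exp as a soft max: LSE(q * u) stays within an additive log(n) of
# q * max(u), so after dividing by q the gap vanishes as q grows.
u = [0.3, -1.2, 0.9, 0.05]

def lse(vals):
    m = max(vals)  # standard max trick for numerical stability
    return m + math.log(sum(math.exp(v - m) for v in vals))

errs = []
for q in [1.0, 10.0, 100.0]:
    errs.append(lse([q * v for v in u]) - q * max(u))
print(errs)  # each gap lies in [0, log 4], independently of q
```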
Minimizing the quantity maxᵢ (−q yᵢ w̄ᵀxᵢ) is the same as maximizing minᵢ (q yᵢ w̄ᵀxᵢ). Note this is literally the same thing -- it's not like you're switching a min and a max in some minimax sense. It's really just the sign: the max of the negated quantities equals minus the min of the quantities, and minimizing the negative of something is the same as maximizing it. OK? So basically you are maximizing the margin: that's what this is. So under this approximation, as q goes to infinity, you are maximizing the margin. And next time we are going to make this formal with essentially the same intuition, but the proof will be cleaner -- it won't involve juggling all these approximation errors. OK. I think that's all for today. Thanks.
[Stanford CS229M: Machine Learning Theory, Fall 2021 -- Lecture 13: Nonconvex optimization; nonconvex optimization for PCA and matrix completion] OK, cool. So I guess let's talk about the materials today. Last time we talked about some of the bigger conceptual questions in deep learning theory, and today we are going to start talking about the optimization perspective in deep learning, for two lectures. Here I'm going to explain what "optimization landscape" means -- it really means the surface of the loss function, as you will see. We are going to introduce some very basic things about optimization, but the main focus is actually not how you update parameters or how you design algorithms. The focus is more to analyze what the functions you are optimizing look like, so that you can use some standard optimization algorithm on them. So you don't need any background in optimization. You probably need to know what gradient descent is -- I'm going to define it again, but you should know what the algorithm is. There's no concrete requirement about details like what momentum looks like or what stochastic gradient descent exactly is; you don't necessarily have to know them. OK, cool. So the question we are trying to address -- just to quickly come back to the last lecture -- the bigger question is: many optimization algorithms are designed for convex functions, so why can they still work for nonconvex functions -- and actually work pretty well in practice -- in deep learning? Note that it's not that these algorithms, like gradient descent or stochastic gradient descent, can work for all functions you might want to optimize. Definitely there are many functions they cannot optimize.
And there are such examples in many areas of research. But in machine learning, typically people assume -- and also somewhat observe -- that you can optimize your function pretty well even though the function is not convex. Of course, even in machine learning there are atypical cases, or outliers, whatever you call them, especially if the parameterization of your model is very complex or somewhat weird; then you can face difficulties. For example, one simple example: if you have a very deep feedforward network, a standard deep network, it's actually pretty hard to optimize, because sometimes you have vanishing gradients, sometimes exploding gradients, and so on. However, some of these issues are solved by changing the architecture, which changes the optimization landscape. Anyway, the bottom line is that in most cases, people observe that nonconvex functions in machine learning can be optimized pretty well by gradient descent, stochastic gradient descent, or their variants, and we are trying to understand why we can optimize reasonably well. So that's the question. And before talking about more details, let's quickly review what gradient descent is, just in case. This is very quick; I'm just going to define some notation. Suppose g(θ) is the loss function. I'm using g here just because I want a generic letter instead of L -- L would probably be the more natural letter, but I want something generic. And the algorithm is: θ₀ is some initialization, and θ_{t+1} = θ_t − η ∇g(θ_t). This is gradient descent, and you can have stochastic versions of it; many of you probably know them. And I'm going to list a few facts just to motivate the discussion here.
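The update rule above is all there is to the algorithm; a minimal sketch on a simple convex quadratic, where the answer is easy to check:

```python
# A minimal sketch of the gradient descent update
#   theta_{t+1} = theta_t - eta * grad g(theta_t),
# run on g(theta) = (theta - 3)^2, whose unique minimizer is theta = 3.
def gradient_descent(grad, theta0, eta, steps):
    theta = theta0
    for _ in range(steps):
        theta = theta - eta * grad(theta)
    return theta

theta = gradient_descent(grad=lambda t: 2 * (t - 3.0), theta0=0.0,
                         eta=0.1, steps=200)
print(theta)  # -> converges to the minimizer 3.0
```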
So when we're looking at nonconvex functions -- maybe let me draw a nonconvex function, something like this. And the first fact, or rather the first observation: GD cannot always find a local minimum or the global minimum. Even for continuous functions, this is kind of obvious, because it depends on where you initialize and what the function looks like. For example, in this picture, suppose you initialize here. Gradient descent goes rightward, maybe overshoots a little bit and comes back, and so on. But at the end of the day, it converges to this local minimum if your step size is small enough; it gets stuck at this local minimum and stays there. And even if you have stochasticity, if the stochasticity is not big enough, you are not going to escape to the other, global, minimum. So clearly you cannot hope that gradient descent works in the worst case for all possible nonconvex functions. Observation two -- actually, this one is a theorem: finding the global minimum of general nonconvex functions is NP-hard. Some of you with statistics backgrounds may not be familiar with NP-hardness; it doesn't really matter -- it's just saying that it is computationally intractable to find the global minimum. But to clarify what that means: it means there exists a function you cannot solve. It's only saying that you cannot, in polynomial time, solve all possible functions with gradient descent or with any algorithm. It doesn't mean there is no subset of functions you can easily solve -- for example, the subset of convex functions can be solved in polynomial time. OK? And observation three is, obviously, the opposite observation.
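Observation one in code (a made-up 1-D example): on a simple nonconvex function, where gradient descent ends up depends entirely on the initialization.

```python
# f(x) = (x^2 - 1)^2 + 0.3 x has a shallow local minimum near x ~ +0.96 and
# the global minimum near x ~ -1.04; GD started in the right basin never
# crosses over to the global one.
def f(x):
    return (x * x - 1) ** 2 + 0.3 * x

def fprime(x):
    return 4 * x * (x * x - 1) + 0.3

def run_gd(x, eta=0.01, steps=5000):
    for _ in range(steps):
        x = x - eta * fprime(x)
    return x

x_right = run_gd(0.8)    # initialized in the right (shallow) basin
x_left = run_gd(-0.8)    # initialized in the left (global) basin
print(x_right, x_left)   # two different limit points
```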
So gradient descent can solve convex functions, as I said. And observation four is that objectives in deep learning are nonconvex. This is probably not entirely trivial -- it's almost trivial, but not entirely. It's easy to see that you cannot prove they are convex, but you need a little bit of calculation, or some construction, to see they are actually not convex. Generally, they are not convex just because there are so many nonlinearities, and most of the convex functions we know are fairly simple, like a linear function composed with a convex loss, for example. As soon as you go beyond two layers, it's not convex. OK? And observation five, which I think I mentioned: gradient descent or stochastic gradient descent does work -- let me be precise -- it finds an approximate, or sometimes you can even claim almost exact, global minimum of loss functions in deep learning. Of course, this is not a 100% rigorous statement, because it depends on which loss function you are talking about, what [INAUDIBLE] you have, what architectures you have, and so on. But I'm just saying that for most cases in deep learning, SGD or GD seems to work pretty well. And how do we know it's finding the global minimum? Because we know the loss function is nonnegative. Suppose you run ImageNet or some vision experiment: the loss function is always nonnegative, so the global minimum is at least zero, and you can see how small a loss SGD or GD gets you. And often the loss is pretty small, something like 10⁻², or, depending on whether you use regularization, sometimes 10⁻⁴ or 10⁻⁵. So you believe you get at least an approximate global minimum. Cool.
So what's going on here? There are positive empirical observations, and there are negative results about the NP-hardness, the intractability, of optimizing nonconvex functions. The way to reconcile them is just that the lower bound, the impossibility result, is about worst-case functions, and we are not actually optimizing worst-case functions. So in my mind, the picture is: you have the family of all functions, and in this family there are functions that are super hard to solve. There is also a small subset called the convex functions, and these are easy to solve -- gradient descent can solve them. But we haven't identified all the functions we can solve: there are actually more functions than the convex ones that gradient descent or some other algorithms can solve -- a slightly larger family in between. And today we are going to talk about those kinds of functions. Of course, we cannot identify all the functions that are benign enough for us to solve, but we are going to identify a subset that is bigger than the convex subset: functions that are nonconvex but have some benign properties. And the task is to figure out what properties make them nice and easy to optimize. All right, so here's our plan for this lecture and maybe the first part of the next lecture. Step one: identify a larger set of functions that SGD or GD can solve to global optimality. Step two: prove that some of the loss functions in machine learning problems belong to this larger set we just identified. Most of the effort will be spent on the second bullet.
For the first bullet, you do need to show why SGD can solve this set of functions. I'll tell you what people can show along that line, but I won't go through the details. The results are, in some sense, intuitive, but they require a lot of background to discuss in detail -- you need to know a lot about analyzing these iterative optimization algorithms. That's why we don't focus on that; we mostly focus on the second part, which is more about the statistical properties of the functions used in machine learning. OK, cool. So the basic idea is the following, and it's very simple. You know that gradient descent can find a local minimum -- this is somewhat easy to believe, although there are some caveats about it, which I'll come back to; let me just remind you now that there is a caveat. But suppose, roughly speaking, gradient descent can find a local minimum, and suppose you know in addition that all local minima of f are also global. Then these two together mean that GD can find the global minimum, right? So the set of functions we are going to identify as solvable by GD and SGD is just the set of functions with the property that all local minima are also global minima. And then we need to characterize and show that the functions we actually use in machine learning have this property. Of course, not all problems have this property; we're going to prove it in some, actually quite simple, cases. But as I mentioned, there is a caveat about whether you can even converge to a local minimum. This is somewhat nuanced, so I want to be as clear as possible about it. I'm going to formalize this "converging to a local minimum", but I'm not going to prove any of the theorems here.
So the next part is convergence to a local minimum. Let me start with some definitions to formalize it. Let f be twice differentiable -- sometimes you can extend this to functions that are only once differentiable, but for simplicity, let's say f is twice differentiable. And what's the definition of a local minimum? You've seen this in calculus class: x is a local min of the function f if there exists an open neighborhood -- call it N -- around x such that all the function values in N are at least f(x). So f(x) is literally a minimum within this neighborhood. That's the definition of a local min. And from calculus, you probably know that if x is a local min, then the gradient ∇f(x) = 0 and the Hessian ∇²f(x) is positive semidefinite (PSD). These are necessary conditions for being a local minimum, but not sufficient: it's not true that if the gradient is 0 and the Hessian is PSD, then you are at a local min. Why? A simple example -- actually, I'm not taking the simplest one, for a reason: I'm going to use it again -- is f(x₁, x₂) = x₁² + x₂³. In this case the origin (x₁, x₂) = (0, 0) satisfies gradient 0, and the Hessian is PSD: the Hessian at the origin is the diagonal matrix with entries 2 and 0 -- in one direction it's 2, in the other direction it's 0 -- so it's PSD. Cool. All right.
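The counterexample is easy to check numerically; a quick sketch:

```python
# The counterexample f(x1, x2) = x1^2 + x2^3: at the origin the gradient
# (2*x1, 3*x2^2) is (0, 0) and the Hessian diag(2, 6*x2) = diag(2, 0) is PSD,
# yet the origin is not a local minimum.
def f(x1, x2):
    return x1 ** 2 + x2 ** 3

grad_at_origin = (2 * 0.0, 3 * 0.0 ** 2)
print(grad_at_origin)  # -> (0.0, 0.0)

# But every neighborhood of the origin contains strictly smaller values,
# obtained by moving along the negative x2 axis:
smaller = [f(0.0, -eps) for eps in (0.1, 0.01, 0.001)]
print(smaller)  # all negative, while f(0, 0) = 0
```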
But actually, the origin is not a local minimum, as you can see: if you decrease x₂, you make the function smaller in any neighborhood, because x₂³ is a cubic, so you can always make it smaller than 0 in a neighborhood of 0. OK? So from this example you can kind of see what happens. Fundamentally, what's the problem here? The problem arises when the gradient ∇f(x) is 0 and the Hessian is positive semidefinite but not strictly positive definite -- when the Hessian vanishes in some direction. In that direction you have no first-order descent and no second-order curvature; you are pretty flat. That makes things tricky, because then the higher-order derivatives start to matter. If your second-order derivative is nonzero in a direction, the third-order term is dominated by the second-order term as long as the neighborhood is small enough. But if your second-order derivative is literally 0 in some direction, then the third-order derivative starts to matter. That's why being a local minimum is not always a property of just the first- and second-order derivatives. And once it becomes about third- or fourth-order derivatives, things become much more complicated. Actually, if you look at the hard instances in the NP-hardness, the intractability results for optimization, all the hard cases happen when you have to deal with higher-order derivatives, like fourth-order derivatives. This probably doesn't make much sense if you're not familiar with how NP-hardness is proved.
But basically, you can embed hard instances, SAT-type instances, into the fourth-order derivatives, so that deciding whether the fourth-order term is nonnegative is equivalent to solving the SAT problem. Anyway, this is only for those who know a little about computational intractability results. The intuition is just that higher-order derivatives are hard to deal with, especially fourth order and above. And there's a theorem: verifying whether x is a local minimum of f, without any further assumptions on f, is NP-hard, and consequently finding a local minimum is also NP-hard. So I've told you that finding a global minimum is NP-hard, but finding a local minimum is NP-hard too. This is the caveat I was referring to. In most conversations, if you talk to me about research, say, we would think of finding a local minimum as easy. In general that's the right conclusion, but it's not exactly true: we have to account for these pathological cases, which make things harder. So how do we proceed? If finding a local minimum is hard, then this plan doesn't work as stated. The way around it is a condition that removes the pathological cases, so that you can find a local minimum in polynomial time, and then we can execute our plan. So here is a condition called the strict-saddle condition. If your function satisfies it, then the pathological cases that require high-order derivatives are ruled out. I'm not sure this makes sense before I define it.
But informally, you are assuming that your function doesn't have these subtle fake candidates for local minima: every point, whether it's a local minimum or not, can be classified by examining only the first-order gradient and the second-order derivatives. There are no pathological cases in your function. So how do we formalize this? This is the strict-saddle condition. The paper to cite is Lee, et al. By the way, I wrote a book chapter about this kind of optimization material for our book, so I can send that to the person who takes the scribe notes, and that should help as a reference. But the material is not exactly the same as the book chapter, so you still have to write the scribe notes from scratch in some sense. OK, cool. So the definition of strict-saddle: I'm citing this paper because not every paper uses exactly the same definition. The original paper that introduced this term and this notion is by Rong Ge, et al. in 2015. That paper has a slightly different definition, but the definition in Lee, et al. is a little easier to use for subsequent research, and I think people have somewhat converged to it. So here is the definition. We say f is (α, β, γ)-strict-saddle if every x in R^d satisfies at least one of the following. The first condition: the 2-norm of the gradient, ‖∇f(x)‖₂, is larger than α. Of course, some x satisfy this, and such points are not stationary points, hence not local minima. By the way, by stationary point I mean first-order stationary, meaning a point with gradient 0. So if you satisfy condition one, you cannot be a stationary point, and you cannot be a local minimum.
Here α, β, γ are all positive numbers. The second condition is that λ_min(∇²f(x)), the minimum eigenvalue of the Hessian at x, is less than −β. If you satisfy this, you cannot be a local minimum, because your Hessian is not positive semidefinite. In some sense, you can think of α, β, and γ as being super small, even close to 0; we just require them to be strictly positive for technical purposes. And the third condition, the third possibility, is that x is γ-close, in Euclidean distance, to a local minimum, call it x*. The choice of distance metric here is probably not that important, because we are not going to be very quantitative about this; everything is polynomial. So conditions one and two each rule out certain kinds of points from being local minima. But note that one and two don't classify everything, because there exist points with gradient 0 and PSD Hessian that are still not local minima. That's the pathological case: a point can fail to satisfy one and two and still not be a local minimum. And this definition is saying exactly that, if you don't satisfy one or two, then there is no such pathology: you have to be close to a genuine local minimum. That's what the strict-saddle condition says. Maybe let me pause for a moment to see whether there are any questions. I'm looking at the query bar; I know it's empty, but if you have any questions, feel free to ask. Q: Are α and β positive? Yes, that's right.
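The three cases of the definition can be sketched as a small classifier. This is my own illustration, with made-up thresholds and a toy function whose local minima we know in closed form; `strict_saddle_case` is a hypothetical helper, not from any paper.

```python
import numpy as np

# Which of the three strict-saddle cases does a point x fall into?
# Assumes we are handed the gradient, the Hessian, a known local min
# x_star, and thresholds alpha, beta, gamma (all positive).
def strict_saddle_case(grad, hess, x, x_star, alpha, beta, gamma):
    if np.linalg.norm(grad(x)) > alpha:
        return 1            # case 1: large gradient, not stationary
    if np.linalg.eigvalsh(hess(x)).min() < -beta:
        return 2            # case 2: negative curvature, not a local min
    if np.linalg.norm(x - x_star) <= gamma:
        return 3            # case 3: close to a genuine local min
    return None             # strict-saddle condition violated at x

# Toy function f(x) = (x1^2 - 1)^2 + x2^2: local minima at (+-1, 0),
# and a saddle point at the origin.
grad = lambda x: np.array([4 * x[0] * (x[0]**2 - 1), 2 * x[1]])
hess = lambda x: np.array([[12 * x[0]**2 - 4, 0.0], [0.0, 2.0]])
x_star = np.array([1.0, 0.0])

print(strict_saddle_case(grad, hess, np.zeros(2), x_star, 0.1, 0.1, 0.1))  # 2
print(strict_saddle_case(grad, hess, x_star, x_star, 0.1, 0.1, 0.1))       # 3
```

The saddle at the origin is caught by case 2 because its Hessian has eigenvalue −4; the true minimum lands in case 3.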
So this definition only makes sense when α, β, and γ are positive. Cool. Any other questions? Q: The third strict-saddle condition sounds hard to check. Yes, that's a great point. You cannot check it: there is no way to verify empirically that your function satisfies this condition. I'm not 100% sure about this, but I think you can even prove that, given an arbitrary differentiable function, checking whether it satisfies strict-saddle should be as hard as finding a local minimum, in some sense. So this condition is not something you're supposed to check numerically. It's something you're supposed to prove theoretically, when you can; of course, in many cases nobody can. But the condition itself is not meant for numerical verification. That's a good question. OK, cool. And by the way, always feel free to ask questions, even as I'm speaking. OK, so we have the condition. Now, here's what you can do with it. Here's a theorem. The theorem is somewhat informal; actually, it's pretty formal in the sense that all the bounds are correct, it's just that I won't specify some of the details. Suppose f is (α, β, γ)-strict-saddle. Then many optimizers, for example GD and SGD (run with the right variants, e.g. with added perturbations), and many other algorithms like cubic regularization, can converge to a local min with ε error in Euclidean distance in time poly(d, 1/α, 1/β, 1/γ, 1/ε), where d is the dimension. This theorem is very coarse-grained; of course, different optimizers have different convergence rates.
But at least for the purpose of this course and this lecture, we are not interested in which one is faster. We are mostly interested in polynomial time versus exponential time. And the point is: under the strict-saddle condition you don't have the pathological cases, and you can converge to a local minimum in polynomial time. All right. OK. By the way, to explain the name strict-saddle: the pathological case is a saddle point. In those cases the gradient is 0 and the Hessian is PSD but not strictly positive definite, so in some direction you have flat curvature, and a third-order derivative can make the point a saddle. In other words, the condition is saying that, if a point is a saddle point, you can tell it's a saddle point from strictly negative curvature; hence "strict saddle". Q: What is the third optimizer inside the parentheses? That's a good question. That's cubic regularization, one of the early works, by Nesterov and Polyak in 2006. There are many other optimizers; I published a paper on this, and many other people have too. I can add more references in the final scribe notes to cite some of the recent works. All right. OK, cool. So now we can converge to a local minimum under these conditions. And now, suppose you make the additional assumption that all local minima are global; then we are good. Basically, the next theorem says that, if all local minima are global and you have the strict-saddle condition, then optimizers can converge to a global min.
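To make the "right variants" remark concrete, here is a minimal sketch of my own (a crude stand-in for perturbed GD or SGD, not any specific paper's algorithm): on f(x) = x1² − x2² + x2⁴/2, the origin is a strict saddle with negative curvature along x2, and the local minima sit at x2 = ±1. Plain GD started exactly on the saddle's stable manifold stalls; a little gradient noise lets it follow the negative curvature out.

```python
import numpy as np

# f(x) = x1^2 - x2^2 + x2^4/2 has a strict saddle at the origin
# and local minima at (0, +-1).
rng = np.random.default_rng(0)
grad = lambda x: np.array([2 * x[0], -2 * x[1] + 2 * x[1]**3])

x_gd = np.array([1.0, 0.0])      # x2 = 0: exactly on the flat direction
for _ in range(300):
    x_gd = x_gd - 0.1 * grad(x_gd)
print(x_gd)                      # stuck at the saddle (0, 0)

x_pgd = np.array([1.0, 0.0])     # same start, but with noisy gradients
for _ in range(300):
    x_pgd = x_pgd - 0.1 * (grad(x_pgd) + 0.01 * rng.standard_normal(2))
print(x_pgd[1])                  # |x2| near 1: escaped to a local min
```

The noise scale and step size here are arbitrary choices for the demo; the point is only that the negative-curvature direction amplifies any small perturbation.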
So here is a theorem that formalizes this. I'm writing it a slightly different way, unpacking it a bit, because I think it's either a slightly different way of thinking about it or just more explicit. So instead of assuming "all local minima are global" plus strict-saddle separately, rephrase them together like this: there exist ε₀ > 0, τ₀ > 0, and c such that, if x in R^d satisfies ‖∇f(x)‖ ≤ ε₀ and ∇²f(x) ⪰ −τ₀·I, then x is ε₀^c-close to a global minimum of f. What do these two conditions mean? They say x is an approximate local minimum in the checkable sense: you cannot rule out the pathological cases, but you pass the first-order test, the gradient is small, and you approximately pass the second-order test, since the Hessian is almost nonnegative. The power c is just there to relax the conclusion, so that you can have, say, √ε₀-closeness or something like that. So this assumption is just a slightly different way of saying that you have all local minima global and strict-saddle together. And then, given this condition, optimizers, the same family of optimizers that converge to local minima, many of them, can converge to a global min of f up to, say, δ error in Euclidean distance in time poly(1/δ, 1/τ₀, d). All right, so it's not stated exactly like the strict-saddle theorem, but if you think about it, it's basically the same statement. OK. Anyway, cool.
So we are basically done with the first part: identifying a subset of functions that are easy to optimize, namely the functions where all local minima are global (plus strict-saddle). Next, we are going to show some examples where these properties can be proved rigorously in machine learning settings. These examples are pretty simple; they are not deep learning. But they are still roughly the best that people can do, in some sense. So this is just to give some examples where these properties hold. OK. So we have two examples. The first one is PCA, or matrix factorization. Fundamentally, this is more or less the same as a linearized network, although for linearized networks there is a little more to do beyond this. And the second example is matrix completion. This is an important machine learning question in its own right: before deep learning, it was one of the most important topics in machine learning, especially among nonconvex problems, and it's still used in recommendation systems. So we're going to talk about that. OK, cool. Any questions so far? Let's talk about PCA first, or maybe more precisely, matrix factorization. We are given a matrix M in R^{d×d}, and we want to find the best rank-one approximation of M; let's stick to the rank-one case. You probably know from other classes that the best rank-one approximation comes from the eigendecomposition, or the singular value decomposition, of the matrix. Just for simplicity, let's also assume M is symmetric and PSD. In this case, the best rank-one approximation is the top eigenvector times its transpose, up to some scaling. So this is not a hard problem.
So you can just run any eigenvector solver to find the top eigenvector and scale it properly, and you get the best rank-one approximation. But for the purposes of this class, we are interested in the nonconvex objective function that interprets this problem in the most literal way: find a vector x minimizing g(x) = ‖M − xxᵀ‖_F². So you are approximating M by the matrix xxᵀ (I know the best rank-one approximation should be symmetric, so I parameterize it as xxᵀ), and you measure the error in Frobenius norm. OK? And this is a nonconvex objective: you have a quadratic term inside, and you take the square of that, so it becomes a degree-four polynomial, and it's nonconvex. Our goal is to show that, even though g is nonconvex, all local minima of g are global minima under the assumptions we mentioned: rank one, PSD, and so forth. OK? I think I forgot to paste a figure here, but if you look at the one-dimensional case, d = 1, you just have a scalar, g(x) = (m − x²)². If you plot this function, there are two local minima, and they are both global because of the sign symmetry x ↦ −x. In higher dimensions it becomes a little more complicated: for rank one there are still only two local minima, but it looks more complicated, and in general you can have rotational symmetries that produce more. OK, so let's talk about the proof. How do we prove this?
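Before the proof, here is a quick numerical sanity check of my own (not from the lecture notes): run plain gradient descent on g(x) = ‖M − xxᵀ‖_F² for a random symmetric PSD matrix and compare the result against √λ₁·v₁ from an eigensolver. The step size and iteration count are arbitrary choices for the demo.

```python
import numpy as np

# Gradient descent on the nonconvex objective g(x) = ||M - x x^T||_F^2.
rng = np.random.default_rng(1)
d = 5
A = rng.standard_normal((d, d)) / np.sqrt(d)
M = A @ A.T                                   # random symmetric PSD matrix

grad = lambda x: 4 * ((x @ x) * x - M @ x)    # grad g(x) = -4(M - x x^T)x

x = rng.standard_normal(d)                    # random initialization
for _ in range(5000):
    x = x - 0.01 * grad(x)

lam, V = np.linalg.eigh(M)                    # eigenvalues in increasing order
x_star = np.sqrt(lam[-1]) * V[:, -1]          # sqrt(lambda_1) v_1
err = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
print(err)                                    # small: GD found a global min
```

The `min` over the two signs is needed because ±√λ₁·v₁ are both global minima; random initialization avoids the saddle points almost surely.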
As you can imagine, the plan is pretty simple. You first find all the first-order stationary points. Then you find all the local minima among them, and you prove that they are all global minima. So basically, we solve the stationarity equations and see what the possible local minima are. So let's first use the gradient condition: ∇g(x) = 0. What is ∇g(x)? I'm not going to give a detailed calculation here, but believe me, ∇g(x) = −4(M − xxᵀ)x. (I think this is actually a question on homework 0, maybe question 2 or 3, about how to compute a gradient.) Now let's write out what setting this to 0 means. Since (xxᵀ)x = ‖x‖₂²·x, the condition becomes M·x = ‖x‖₂²·x. The left-hand side is a matrix-vector product and ‖x‖₂² is a scalar, so this says exactly that x is an eigenvector of M, and ‖x‖₂² is the corresponding eigenvalue. So basically, one way to organize this is: first find the unit eigenvectors, since eigenvectors have no inherent scale. Suppose, just to build intuition, that the eigenvalues are distinct, even though we don't have to assume this. Then you have unit eigenvectors v₁ up to v_d, with eigenvalues λ₁ up to λ_d.
And then, basically, all the first-order stationary points are of the form x = ±√λᵢ·vᵢ, because then ‖x‖₂² = λᵢ, which is exactly the corresponding eigenvalue. (The point x = 0 is also stationary, but it is easy to rule out: the Hessian there has negative curvature along v₁, as we'll see.) Now let's determine which of these are local minima, and then argue that all local minima are global. Ideally, we want to conclude that only x = ±√λ₁·v₁, the top eigendirection, is a local minimum, because √λ₁·v₁ is also the global minimum. OK? (We also don't want to assume all eigenvalues are distinct, so there is a small additional argument for that case as well.) So how do we do this? Let's compute the Hessian; we need to use the Hessian. This is actually a typical question I get when people start thinking about these optimization problems: how do you write down the Hessian? The Hessian can sometimes be very hard to write down. Here it's actually not that bad, because the Hessian is d × d, since you have d parameters. But sometimes your parameter is a matrix, and the Hessian becomes a fourth-order tensor, which is very complex even to write down. So here is a very useful trick, which also has some fundamental reasons for being useful: instead of the Hessian itself, look at the quadratic form of the Hessian, vᵀ∇²g(x)v. This is much easier to compute. Why is it much easier to compute? Because the methodology is the following.
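The stationary-point characterization is easy to verify numerically. This is a small sketch of my own: for a random symmetric PSD matrix, every point x = √λᵢ·vᵢ makes the gradient of g vanish.

```python
import numpy as np

# Check that x = sqrt(lambda_i) v_i are stationary points of
# g(x) = ||M - x x^T||_F^2, i.e. grad g = -4(M - x x^T)x vanishes there.
rng = np.random.default_rng(2)
d = 4
A = rng.standard_normal((d, d))
M = A @ A.T                                   # symmetric PSD, full rank a.s.

grad = lambda x: 4 * ((x @ x) * x - M @ x)
lam, V = np.linalg.eigh(M)

for i in range(d):
    x = np.sqrt(lam[i]) * V[:, i]
    print(i, np.linalg.norm(grad(x)))         # essentially 0 for every eigenpair
```

The algebra behind it: for x = √λᵢ·vᵢ, we have ‖x‖² = λᵢ and Mx = λᵢx, so the two terms in the gradient cancel exactly.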
This is, in some sense, in the homework 0 solutions, in the question that asks for the gradient of this function; the same methodology applies here for the Hessian. Roughly speaking, what you do is the following. You consider g(x + εv) and Taylor-expand in ε. Whatever g is, as long as it has an analytic form, maybe a composition of several functions, you just iteratively Taylor-expand it into g(x + εv) = g(x) + ε·(linear term in v) + ε²·(quadratic term in v) + higher-order terms. And then the coefficient of ε is ⟨∇g(x), v⟩, and the coefficient of ε² is exactly ½·vᵀ∇²g(x)v. I'm not sure whether this is too abstract as I say it; if you didn't get exactly what it means, just look at the homework 0 solutions. It's doing exactly this. So this is a very simple way to compute the quadratic form of the Hessian without writing the Hessian as a complicated matrix or tensor. If you apply this technique here, you get the quadratic form of the Hessian as an analytic formula. Note that I still don't have an explicit representation or characterization of the Hessian itself; I'm only writing the quadratic form as an analytic formula in x, M, and v, and so on. In this case, from this quadratic form you could figure out what the corresponding matrix is.
You could write it out as a matrix, and that's fine here. But in many other cases it's actually very hard to write out the Hessian as a matrix, while the quadratic form is just an analytic formula. And as you will see, the quadratic form is actually the only thing that matters. Because even if you were handed the Hessian as some complex object, there isn't much you could do with it directly; you would pretty much still end up looking at specific quadratic forms. So we have the quadratic form. And we know that ∇²g(x) ⪰ 0 is equivalent to: for every v, the quadratic form vᵀ∇²g(x)v ≥ 0, OK? Here is what I mean by saying you only care about the quadratic form. You plug in different v's. Which v's do we want to plug in? Shall we plug in all of them, or just some specific ones? It turns out, in many cases, you only care about a few special v's, because some v's are much more informative than others. You want to choose informative v's to evaluate this formula, so that you get important information about what x can be. Because at the end of the day, you care about x: you are using these conditions to pin down the local minima. So what are the informative v's? It turns out the informative direction here is the top eigenvector. How do you know this? It requires some intuition, some trial and error. But it also makes sense: the top eigendirection is the global minimum direction, so you test whether moving toward the global minimum can decrease your function in a second-order sense. To some extent it's intuitive, to some extent it's trial and error. But anyway, v = v₁ is a good choice.
Because if you plug it in, you get v₁ᵀ∇²g(x)v₁ = 4(2⟨x, v₁⟩² + ‖x‖₂² − v₁ᵀMv₁), and at a local minimum this must be ≥ 0. You can probably see why this choice is informative: the term −v₁ᵀMv₁ is the negative one, and choosing v = v₁ makes it as negative as possible. It's the hardest test, in some sense. Now, let's see what we get from this inequality. Realize that we don't care about the Hessian at every point; we only care about it at the first-order stationary points, because only those can possibly be local minima. So we take x to be an eigenvector, since we are only filtering the local minima out of the stationary points. Because x is an eigenvector, we have two cases. The first case: x has the top eigenvalue λ₁. Then x is just a global minimizer and we are done, because by the standard PCA results, the best rank-one approximation is the top eigenvector with the right scaling. The second case: x has eigenvalue λ strictly less than λ₁; it could be the second eigenvalue, the third, and so on. Then, because the eigenvalue of x is different from the eigenvalue of v₁, x is orthogonal to v₁: eigenvectors with different eigenvalues must be orthogonal. (There is no guarantee that two eigenvectors in general are orthogonal; they could share an eigenvalue and live in the same eigenspace. But with different eigenvalues, they have to be orthogonal.) So ⟨x, v₁⟩ = 0. And then, if you evaluate the second-order condition above in this case, what happens?
The first term goes away, since ⟨x, v₁⟩ = 0, so the condition becomes ‖x‖₂² ≥ v₁ᵀMv₁. And recall that v₁ᵀMv₁ is just λ₁, while ‖x‖₂², by the first-order condition, is the eigenvalue of x, namely λ. So we get λ ≥ λ₁, and we have a contradiction with the assumption that λ < λ₁. Hence no stationary point with a non-top eigenvalue is a local minimum. OK, any questions about this? So maybe a very quick summary. This is saying that, if x is a stationary point (by stationary point I always mean first-order stationary; I won't keep clarifying that) and x is not a global min, then moving in the v₁ direction wouldn't change the function to first order, because x is stationary, so the function is flat there; but it leads to a second-order improvement. And that's why x is not a local minimum: if you are a local minimum, moving in the v₁ direction shouldn't give you any second-order improvement either. That's basically the gist of the analysis. All right, cool. OK, so now let's talk about matrix completion, which is kind of an upgraded version of PCA. And as I said, this is actually a pretty important question in machine learning. So let me define the question first, and then I can briefly talk about why people care about it. Let's also do the rank-one version, just for simplicity. So the setup: we assume the ground-truth matrix M is rank one, symmetric, and PSD, just for simplicity. In other words, you can write M = zzᵀ.
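The contradiction step can also be checked numerically. This is my own sketch: at every stationary point built from a non-top eigenvector, the curvature in the v₁ direction is negative, equal to 4(λᵢ − λ₁), so each such point is a saddle.

```python
import numpy as np

# At x = sqrt(lambda_i) v_i with i non-top, the quadratic form of the
# Hessian of g along v1 equals 4(lambda_i - lambda_1) < 0: a saddle.
rng = np.random.default_rng(4)
d = 4
A = rng.standard_normal((d, d))
M = A @ A.T

lam, V = np.linalg.eigh(M)          # eigenvalues in increasing order
v1 = V[:, -1]                       # top eigenvector

def quad_form_v1(x):                # v1^T (Hessian g)(x) v1, closed form
    return 4 * ((x @ x) + 2 * (x @ v1)**2 - v1 @ M @ v1)

for i in range(d - 1):              # every non-top eigenpair
    x = np.sqrt(lam[i]) * V[:, i]
    print(i, quad_form_v1(x))       # negative: 4(lambda_i - lambda_1)
```

Here ⟨x, v₁⟩ = 0 kills the positive cross term, ‖x‖² = λᵢ, and v₁ᵀMv₁ = λ₁, which is exactly the contradiction from the lecture.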
Here z is the ground truth, and z is a vector in R^d. The setup is the following. We are given random entries of M: we pick some random indices of M and reveal the corresponding entries, and that's the only thing we know about M. The goal is to recover the rest of the entries. More formally, there is a set Ω, a subset of the indices [d] × [d], and it is a random subset in the sense that every entry is included in Ω independently, uniformly at random, with probability p. And what we observe is P_Ω(M), where P_Ω is defined as follows: P_Ω(A) is the matrix obtained from A by zeroing out every entry outside Ω. So you take the matrix A, and everything that is not in Ω, you make those entries 0. We observe the sparse matrix P_Ω(M), and our goal is to recover M. And why did people care about this question a lot in the past? One reason is its relationship with recommendation systems. Here I'm assuming a symmetric matrix and so forth, but you can relax that a little, which doesn't change the essence of the problem. Think of a matrix where one side, the columns, is indexed by users (say this is a matrix Amazon maintains) and the other side by items, and each entry is the rating of the user for the item. Every user presumably has an opinion about every item, whether they'd like it or not. But it's certainly not the case that every user buys every item; every user buys only a very small subset of the items. And that's why you only see some entries of this matrix; Amazon only sees some of the entries.
And Amazon wants to understand each user's preferences, to know which items each user would like, so Amazon has an incentive to fill in the entire table. I'm only using Amazon as an example; the same applies to many other situations where you recommend items to users. That's why you want to recover the rest of the entries, to serve the users better in the future, and that's why this problem was important. It's still relevant these days, though there are many existing methods to solve it. And the most used methods are basically nonconvex optimization: find the ground-truth matrix M by exploiting the fact that it has low-rank structure. Because why should recovering the missing entries even be possible? If there's no other structure in M, there is no way to recover the unseen entries; they could be arbitrary. So you have to assume that M has low-rank structure, or some other structure. To give you a quick sense of why this structure matters, count parameters: a rank-one d × d matrix is described by about d parameters, since you can write it as xxᵀ. And the number of entries you observe should be larger than this degree of freedom. The number of observations is roughly p·d², since each entry is observed with probability p. So we need p·d² ≳ d; if not, recovery is unlikely to work. That says p ≳ 1/d, and this is actually the regime we are going to work in: p bigger than 1/d by, for example, a log factor or something like that. So that's the setting we're going to be in.
And speaking of the objective function, the following is actually a pretty commonly used method in practice. You minimize f(x) = Σ_{(i,j)∈Ω} (M_ij − x_i·x_j)². Here xxᵀ is the parameterization of the target matrix, and you want this matrix to fit all your observations. So you sum over the observed entries, because those are the only entries you know: M_ij is the observation, x_i·x_j is the prediction, and you take the square of the difference and sum over all observed entries. For notational convenience, you can also write this as f(x) = ‖P_Ω(M − xxᵀ)‖_F²: you look at the error matrix, zero out all the entries you don't know (outside Ω you have no information), and take the sum of squares of the rest of the entries. So that's another way to write the same function. Just a side note: there are actually many other methods that can solve matrix completion, for example convex relaxation methods, and those often have stronger guarantees, such as tighter sample-complexity bounds. But in practice, the convex relaxations take too long, so people actually use objective functions like this one and just run gradient descent on them. And that's why it's practically relevant to analyze this kind of objective: it is indeed what's used in practice. All right, so our main goal is to prove that this objective function has no bad local minima: all local minima are global.
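The objective and the practitioner's recipe can be sketched in a few lines. This is my own illustration, with arbitrary choices of d, p, step size, and iteration count: build P_Ω as a symmetric random mask, run plain gradient descent on f(x) = ‖P_Ω(M − xxᵀ)‖_F², and check that we land near ±z.

```python
import numpy as np

# Rank-one matrix completion by gradient descent on
# f(x) = ||P_Omega(M - x x^T)||_F^2.
rng = np.random.default_rng(5)
d, p = 50, 0.5                      # observation probability well above 1/d
z = rng.standard_normal(d)
z /= np.linalg.norm(z)              # ground truth with ||z|| = 1
M = np.outer(z, z)                  # M = z z^T

upper = np.triu(rng.random((d, d)) < p)
mask = upper | upper.T              # Omega, kept symmetric like M

def grad(x):
    R = mask * (np.outer(x, x) - M)          # P_Omega(x x^T - M)
    return 4 * R @ x                         # gradient of f at x

x = 0.1 * rng.standard_normal(d)             # small random initialization
for _ in range(3000):
    x = x - 0.05 * grad(x)

err = min(np.linalg.norm(x - z), np.linalg.norm(x + z))
print(err)                                   # close to 0: recovered +-z
```

With p this large the problem is easy; the interesting theoretical regime is p just above (log d)/d, where the landscape result below is what guarantees gradient descent still works.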
There is one assumption that I have to specify, but it's not going to be used much in this lecture's proof, because we are going to sweep some of these things under the rug. But I do have to mention it. It may not sound very intuitive, and I won't spend too much time on it, but let me mention it. This is called the incoherence assumption, and this assumption is necessary -- people know it. So first of all, we assume the ground truth has norm 1. This is without loss of generality; it just fixes the scale for convenience. And then, after you fix the scale, you assume that the ground truth vector z -- so M equals zz transpose, where z is the ground truth -- has infinity norm less than mu over square root d, where mu is considered a constant or logarithmic in d. So what it's saying is that this vector z has norm 1, and also the entries are spread out: you cannot have all the mass concentrated on one entry. The reason you don't want that is because, for example, a counterexample is: if z is just e1, then your M is just e1 e1 transpose, which is just the top-left corner of the matrix. You have a very, very sparse matrix whose top-left corner is 1, and there is no way you can recover this matrix unless you observe that top-left corner. So basically, all bets are off -- you have to see enough entries. This incoherence condition is, in some sense, trying to rule out these kinds of pathological cases. But I'm not going to talk too much about it; it's just for the rigor of the proof. OK, cool. So here is the theorem, and I guess I'm going to stop after I state the theorem and then prove it next time. The theorem is: suppose p is something like poly(mu, log d) over d epsilon -- recall that we are in the regime where p is roughly 1 over d, and this is the same regime -- where epsilon is something larger than 0, kind of like a constant.
And this is a poly factor in mu and also polylog in d, OK? So suppose p is on this order, and we assume incoherence. Then all local minima of f are -- so actually, you can prove that they are all exactly global minima, but for the moment we only prove that they are square-root-epsilon-close to either z or minus z. And z and minus z are clearly global minima, because the error there is exactly 0. All right, that's the statement. And also, just to mention, you can also prove strict-saddle conditions. I just didn't include them for the sake of simplicity, but you do have to prove that to have the rigorous result. If you don't prove it, and you only prove that all local minima are global, sometimes you may get somewhat misleading results. I think there is a paper that shows that, in somewhat weird cases, you can show very strong-looking results -- strong in the sense that all local minima are global -- but the reason they are so strong is that somehow, in that setting, you ignore the strict-saddle part, which is problematic. All right, the proof is obviously too long to cover in one minute, so I'll leave it to the next lecture. I can take some questions if anybody has any; otherwise, I think we are good today. OK, there's a question. Sounds great. So: are there any neural network models where these properties are known to hold? The answer is no, especially if you look for a global property -- like, globally, all local minima are global. I don't think we have any proofs for any real neural network models. There is a proof for linearized network models, where all the activations are linear. And actually, in that case, if you have more than two layers, you don't have strict-saddle conditions -- you have a lot of [INAUDIBLE] points.
So basically, the short answer is that I don't think there are any real cases -- satisfactory cases -- where we know how to prove this. I think there are results for two-layer networks if you assume some conditions on the input; for example, if you assume that the inputs are linearly separable, then there is a proof for this. Yeah. And there are a bunch of other cases where you can have some partial results. In the next lecture, maybe the second half, I'm also going to give another result which is somewhat more general. It applies to many different architectures, but it has other kinds of constraints. First of all, it doesn't really show exactly these kinds of landscape properties; it shows that these kinds of properties hold for a special region in the parameter space. That's the so-called NTK approach. I'm going to give more details, but there are also other kinds of problems with that approach -- I'm going to talk more about it next lecture.
[Stanford CS229M: Machine Learning Theory, Fall 2021 -- Lecture 20: Spectral clustering]

OK. I guess let's get started. This is the last lecture of this course. We're going to continue with the spectral approach for clustering, so I'll review some of the last lecture. Last lecture we did the stochastic block model, and one of the main goals was to do eigendecomposition on the graph G from the stochastic block model. And we showed that if you do eigendecomposition on the average graph -- the expectation of G -- then it does give the hidden communities S and S bar, right? I think last time we showed that the second eigenvector is something called u, which looks like 1, 1, 1 and minus 1, minus 1, where the first part is S and the second part is S bar. So basically, if you just take the second eigenvector of the expected graph, you get the hidden community. And we argued that it suffices to show that the graph G and the expected graph, expectation of G, are close in operator norm. This is because if you consider this equation -- you subtract the first eigencomponent from G -- then what you get is that G minus the first eigencomponent equals this perturbation matrix plus the contribution of the second eigenvector. And if you take the eigendecomposition of this matrix, which is something you can compute easily, and you take the top eigenvector of the left-hand side of this equation, then you expect to find something close to u, as long as G minus expectation of G is small. Now, how small? I didn't really formally do this, but essentially you need this perturbation to be much smaller than the signal, right? You need the perturbation, in operator norm, to be much smaller than the rank-1 signal in operator norm. And you can compute the operator norm of the rank-1 signal very easily: it's something like (p minus q) over 2, times n.
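This recovery claim from last lecture can be checked with a small simulation; n, p, and q here are made-up illustrative values, and we read the communities off the second eigenvector of the sampled graph itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 200, 0.8, 0.2
u = np.array([1] * (n // 2) + [-1] * (n // 2))   # hidden +/-1 community vector

# Edge probability p within a community, q across; sample a symmetric graph.
probs = np.where(np.outer(u, u) > 0, p, q)
upper = np.triu(rng.random((n, n)) < probs, k=1)
G = (upper | upper.T).astype(float)

# The eigenvector for the second-largest eigenvalue of G should align with u.
w, V = np.linalg.eigh(G)       # eigenvalues in ascending order
v2 = V[:, -2]                  # second-largest eigenvalue's eigenvector
pred = np.sign(v2)
acc = max(np.mean(pred == u), np.mean(pred == -u))   # sign is arbitrary
print("community recovery accuracy:", acc)
```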
So basically, we are trying to show concentration, right? This is a concentration inequality, because you are trying to prove that G concentrates around the expectation of G in this operator norm sense. I'd love to show this proof. It's a little technical, but it's not very long, and it relates back to what we discussed in lecture 3 or 4, where -- you probably remember -- I said that concentration inequalities are probably among the most important things in this course. If you had to pick one technical tool in statistical machine learning, it's probably concentration inequalities, in my own opinion. So it's probably useful to review why a concentration inequality can help us do something like this. So I'll give a proof of this. Our lemma is that, with high probability, G minus expectation of G, in operator norm, is less than square root of n log n, up to a constant factor. And the first thing to note is that this is not exactly the type of concentration inequality we have talked about before, because before we were talking about scalars: we were saying that the empirical average of some random samples concentrates around the population average. Here it's a little different, because G is a matrix and the expectation of G is also a matrix, so you are doing some kind of matrix concentration, to some extent. And your measure of similarity is not just the absolute value of the difference; it's something like the operator norm of the difference of the matrices. However, you can turn this into something we are familiar with very easily. What you do is the following. This is still uniform convergence, as you will see -- that's the main idea. And why is this the case?
This is because you can easily interpret operator norms as follows. G minus expectation of G, in operator norm, is equal to the max over v -- let me write it down and explain. This is just the operator norm of a symmetric matrix: the definition is that if you have a symmetric matrix A, then the operator norm of A -- there's an absolute value here -- is exactly equal to the maximum quadratic form you can achieve by hitting it with a norm-1 vector. And once you do this, you see that this becomes a scalar, because this quantity is a scalar. So you can write it as the max over v with 2-norm 1 of v transpose G v minus v transpose expectation-of-G v. And what is this? Maybe let me write it down more explicitly. This is the max over v of the sum over i and j of vi vj Gij, minus the expectation of this random variable. And now this becomes a sum of independent random variables, minus the expectation of that sum of independent random variables. So now you can use concentration. If you didn't have the max, you could just use a concentration bound -- this is exactly what Hoeffding's inequality is for. And how do you deal with the max? The max is the part about uniform convergence. Recall that the whole point of uniform convergence is that if you fix the parameter -- here, think of v as the parameter -- you can use Hoeffding's inequality to prove concentration, to prove that the empirical quantity is not very far from the population quantity. And the challenge of uniform convergence is how you take the max -- and here you still have a max. I guess there are multiple ways to deal with this. Of course, the easiest way is probably to just invoke some existing theorem; there are theorems in the literature as well. But if you want to do it yourself, I guess there are two ways.
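Before looking at the two ways to handle the max, the variational characterization itself is easy to check numerically; the symmetric matrix below is just a random example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                          # a symmetric matrix

w, V = np.linalg.eigh(A)
op_norm = np.max(np.abs(w))                # ||A||_op = max |eigenvalue|
v_star = V[:, np.argmax(np.abs(w))]        # extreme eigenvector, unit norm

# |v^T A v| over unit vectors is maximized at the extreme eigenvector v_star;
# random unit vectors never exceed the operator norm.
best_random = 0.0
for _ in range(1000):
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    best_random = max(best_random, abs(v @ A @ v))
print(op_norm, abs(v_star @ A @ v_star), best_random)
```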
So one way is that you can use the Rademacher complexity machinery. It was a while back -- we discussed this probably five weeks ago. One of the techniques is symmetrization: so far this is not in a symmetrized form, but you introduce some Rademacher variables and symmetrize it, and then you can proceed with the whole machinery -- you can essentially view this as the Rademacher complexity of some function class. I think that's actually a pretty clean and nice way. I'm going to leave it; if you're interested, you can do it yourself -- I believe it's not very difficult. What I will show here is an even more brute-force method, which actually uses the first technique we introduced in this class: brute-force discretization. Recall that before we talked about Rademacher complexity, we said that in many cases you can deal with uniform convergence for a continuous function class with a very simple discretization. So what we do here is: for a fixed v with 2-norm 1, we can use Hoeffding's inequality. What you get is that, with probability at most exp(minus epsilon squared over 2) -- I'm not expecting you to check it on the fly, but you can basically plug in Hoeffding's inequality without any modification -- the sum of vi vj Gij deviates from its expectation by more than epsilon. So the probability that it deviates from the expectation is at most exp(minus epsilon squared over 2). And then you take epsilon to be something like O of square root of n log n. So the failure probability, which is exp(minus epsilon squared over 2), is something like exp(minus O(n log n)) -- a pretty small failure probability. And then you take a discretization of the unit ball with granularity something like 1 over poly(n).
This is what we did -- a long time ago, I know, but I think this is what we did in lecture 3. You take a very precise, very small granularity, but it doesn't really matter, because at the end of the day the dependency on the granularity is only logarithmic. So the size of this cover is exponential -- something like exp(O(n log n)) -- and then you can take a union bound over this discretized set. And because your granularity is very small, only inverse poly, you only lose an inverse-poly term, which is smaller than everything else in the inequalities. So eventually, by the union bound, you get that with high probability this is less than epsilon, which was chosen to be square root of n log n. I'm skipping a lot of details, because today we don't have a lot of time to complete all the materials, so I'm keeping it brief -- but I think you get the rough point; it would take too much time to work out the details. And I kind of like this method 2. If I were to state my preference between these methods: sometimes I like method 2 because you can do it very quickly yourself and you know exactly where the dependencies come from. If you do the Rademacher complexity, it will be much cleaner -- you get better constants, cleaner proofs -- but sometimes it's a little less transparent, because you have to go through the whole machinery. And why is this useful? This is useful because now we have this lemma: G and the expectation of G only differ by on the order of square root of n log n in operator norm. And you can compare that with the signal. So compare the noise level, which is O of square root of n log n, with the signal level, which is (p minus q) over 2, times n. This means that if p minus q is much bigger than 1 over square root of n -- up to log factors -- then you recover the vector u approximately. So you can see that you only need p and q to have some separation, but not a lot of separation.
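The lemma and this noise-versus-signal comparison can be sanity-checked numerically; the constant 3 in the test bound is just a generous illustrative choice, not a proved constant.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 400, 0.6, 0.2
u = np.array([1] * (n // 2) + [-1] * (n // 2))
probs = np.where(np.outer(u, u) > 0, p, q)

# Sample the stochastic block model graph and form E[G].
upper = np.triu(rng.random((n, n)) < probs, k=1)
G = (upper | upper.T).astype(float)
EG = probs.copy()
np.fill_diagonal(EG, 0)                 # the sampled graph has no self-loops

noise = np.linalg.norm(G - EG, 2)       # operator norm of the perturbation
bound = np.sqrt(n * np.log(n))          # the sqrt(n log n) rate from the lemma
signal = (p - q) / 2 * n                # operator norm of the rank-1 signal
print(noise, bound, signal)
```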
And the separation depends on the size of the graph, which also makes some sense, because the more vertices you see, the clearer the structure is, in some sense. Suppose you just see two users: everything is kind of too random, and you couldn't tell which one is from which community. But if you see a million users, you can use a lot of different users to cross-validate, in some sense, [INAUDIBLE] the two communities. All right. So I guess this concludes the stochastic block model part. There are some other small remarks which are not super important. You can actually recover the exact community by some post-processing: here, what I showed is that you can only recover the vector u approximately, but you can post-process to get the exact community under certain conditions -- I think under the conditions I'm giving here, you can do it. And because there is a very precise mathematical structure here, there are a lot of works in the literature on this, and you can actually get even the exact constants. Here I am writing p minus q larger than 1 over square root of n, which is definitely very loose. You can get the precise dependencies that you need to recover, and you can have the precise thresholds: below one threshold you cannot recover anything, above that threshold you can recover something, and above another threshold you can recover exactly. All of this is in the literature, if you are interested. And you can extend this to multiple blocks and so forth. OK. So this concludes the stochastic block model. Now I'm going to move on to another, in my opinion, pretty important literature, which is about clustering a worst-case graph. And still, the idea is that if you do eigendecomposition, you are going to recover some approximate structure in the graph.
So we are still going to use eigendecomposition, but the analysis will be different, because here we don't have the stochasticity of the graph. And because you have a worst-case graph, you also have to somehow define what you mean by the hidden community, right? Before, in the stochastic setting, you start with a community and you generate a graph; now you are just given the graph -- the graph is just some arbitrary graph -- and you have to say what you are trying to recover. So let's start with that: what's our goal? This requires a few definitions. Say we are given a graph G with vertices V and edges E. Let's define the so-called conductance. This is actually a pretty important notion which shows up in many different areas of math, of course in different forms. Here it's in terms of a graph's vertices and edges; in other cases, you can define conductance in high-dimensional spaces as well, which is essentially the same definition, though it can look a little different. So, the conductance for a graph: suppose you have a cut, call it S and S bar -- you cut the vertices into two parts, S and S bar. The conductance of S is defined to be the number of edges between S and S bar, over the volume of S. Let's define both of these. E(S, S bar) is the total number of edges between S and S bar -- this is an undirected graph, so "between" is the precise word. Mathematically, this is the sum over i in S and j in S bar of Gij, if I use Gij as the adjacency matrix. I'm overloading the notation a little bit: G is both the graph and the adjacency matrix of the graph. And the volume of S is the total number of edges connected to S, which means you count how many edge endpoints have one end in S: the sum over i in S and all j of Gij.
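These definitions translate directly into code; the two-triangle graph below is a made-up example, not one from the lecture.

```python
import numpy as np

def cut_edges(G, S):
    # E(S, S-bar): edges with one endpoint in S and the other outside S
    S = np.asarray(sorted(S))
    Sbar = np.setdiff1d(np.arange(len(G)), S)
    return G[np.ix_(S, Sbar)].sum()

def volume(G, S):
    # vol(S): sum over i in S and all j of G_ij, i.e. the total degree of S
    return G[np.asarray(sorted(S)), :].sum()

def conductance(G, S):
    return cut_edges(G, S) / volume(G, S)

# Example: two triangles {0,1,2} and {3,4,5} joined by the single edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
G = np.zeros((6, 6))
for i, j in edges:
    G[i, j] = G[j, i] = 1

print(conductance(G, {0, 1, 2}))    # 1 crossing edge / volume 7
```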
So if you draw a graph -- suppose you draw a graph like this, and you define this cut, and suppose this part is S -- then what is E(S, S bar)? E(S, S bar) counts these two edges on the right, because they go from S to S bar. And the volume of S counts all the edges connected to S, which means basically all the edges drawn here: all the green edges are counted. So what is this definition for? The word conductance here is kind of trying to characterize how good the cut is, in some sense -- how separated S and S bar are. The smaller it is, the more separated S and S bar are. But you do have to normalize by the volume. In some sense, the number of edges between S and S bar already captures how separated S and S bar are, but you normalize by the volume to make it more meaningful -- that's what I'm going to argue next. Before that, let me state some basic facts. The volume of S is at least the number of edges between S and S bar -- that's trivial -- so the conductance is always at most 1, and you are trying to make the conductance as small as possible. Another fact is that the volume of S plus the volume of S bar equals the volume of V, the total volume of the graph. This means that if the volume of S is less than the volume of V over 2, then the volume of S is also less than the volume of S bar, and therefore the conductance of S is bigger than the conductance of S bar. But you would like a definition that somehow doesn't depend on how you name S and S bar: the cut (S, S bar) is symmetric, but the conductance of S and the conductance of S bar are different, right?
So, to remove this asymmetry, we just insist that we always only talk about S such that the volume of S is less than the volume of V over 2. You take the smaller part and use that to define the conductance of the cut. Why don't we just define conductance by normalizing by the volume of V? Yes -- so if you normalize by the volume of V, the problem is that the volume of V is a constant; it doesn't change as S changes. I'm going to tell you why you have to normalize, but whatever you normalize by has to change as S changes. Here, so far, I'm only trying to deal with the symmetry: you take the conductance on the smaller side. This is not much of a restriction; you just don't want to cheat by saying "I have a very, very large set and only one point in S bar, so my conductance looks very small" -- in that case you should measure the other side. Maybe before proceeding, let me answer the question of why we have to normalize. We can also define phi of G -- the so-called sparsest-cut value of G -- to be the minimum possible conductance, where again you require that S is the smaller side of the two. So you minimize the conductance subject to the constraint that the volume of S is less than the volume of V over 2. Basically, you just want to find the cut with the smallest conductance. Now let's talk about normalization -- why we have to normalize. I think the reason is pretty much that if you don't normalize -- if you just minimize E(S, S bar) -- it's typically minimized when S is small.
So suppose you draw a graph, for example -- if you don't normalize, basically you prefer to pick a set S that is itself very small, so that it barely connects to the other part. For example, suppose you have a graph like this: two completely connected subgraphs of n over 2 nodes each, so within each subgraph you have complete connections. And then you have some very small number of connections between them -- say every node has about two edges going across. So it sounds pretty clear that the best cut is the one between the two cliques, because within each cluster you have full connectivity, and across the two clusters you have only, say, two edges per node. But if you use the metric E(S, S bar), then some other cut will have a smaller number of crossing edges, because you can take S1 to consist of just one node. Then E(S1, S1 bar) is basically the number of edges connected to S1, which is about n over 2. Let's say the good cut is S2, one of the cliques. Then E(S2, S2 bar) is definitely something bigger than n over 2, because you have n over 2 nodes times the number of blue edges per node -- something like two here; I'm drawing basically two crossing edges per node. So it sounds like you should prefer S2, but with the unnormalized metric you would pick S1. However, if you normalize, it's a different game. The conductance of S1 is E(S1, S1 bar) over the volume of S1, which is about (n over 2) divided by (n over 2) -- so this is about 1. And if you look at phi of S2, the numerator is about n over 2 times 2, and the denominator is the total number of edges connected to S2 -- that's actually a big number.
That's something like n over 2 times (n over 2 minus 1) -- the number of edges within S2 -- plus some edges between S2 and S2 bar. So phi of S2 would be something like order 1 over n. So the conductance of S2 is much smaller than the conductance of S1 once you normalize. Questions so far? OK, cool. So now we have to define the goal. Because you have a worst-case graph, your goal is -- we said the goal is to find an approximate sparsest cut S hat, meaning you want S hat to satisfy that phi of S hat is close to the sparsest possible cut value, phi of G. And the approach we're going to describe is still eigendecomposition. How do we do this? There's [AUDIO OUT] to even state what we mean exactly by eigendecomposition and what kind of results we can have. First of all, let di be the volume of the node i: you take a single node and take its volume, and this di is really just the degree of node i. And let D be the diagonal matrix that contains the di's as its diagonal entries. And let's define the normalized adjacency matrix, called A bar, which is D to the minus 1/2, times G, times D to the minus 1/2, where G is the adjacency matrix -- recall we're overloading notation a little. So what does this really mean? It just means the diagonal matrix with 1 over square root of d1 down to 1 over square root of dn, times G, times the same diagonal matrix again. A diagonal matrix multiplied on the left scales all the rows, and a diagonal matrix multiplied on the right scales all the columns. So basically you scale the rows and columns simultaneously with these numbers. If you do the [INAUDIBLE], what it really means is that A bar ij, the ij entry of the normalized adjacency matrix, is just Gij over square root of di times square root of dj.
So this sounds a little complicated, but I'm mostly stating it for formality, because the key thing can often be seen by assuming the graph is regular. In most cases it suffices to think of G as a regular graph, meaning all the degrees are the same. Suppose G is a kappa-regular graph, meaning di is equal to kappa for every i; then the normalized adjacency matrix is just 1 over kappa times G. So in some sense we really didn't do much except change the scaling. But this scaling is kind of important in the formal sense, because it makes the theorems very clean -- it's not fundamentally super important, though. So if you don't want to think about the di's and dj's, you can pretty much think of this simple case where you have a regular graph. And once we define the normalized adjacency matrix, we can also define the so-called Laplacian matrix, which is I minus the normalized adjacency matrix. You can probably see that one reason we have to normalize is that otherwise it doesn't make sense to take the difference with the identity: the identity is something that doesn't have a scale, so you have to normalize before you can take the difference with it. And this Laplacian matrix is really not doing that much -- it's not that different from the normalized adjacency matrix, because pretty much everything corresponds. The eigenvectors of L are the same as the eigenvectors of A bar, and the spectra are just flips of each other. Say L has eigenvalues lambda 1 up to lambda n -- in this literature you always want to order them -- with eigenvectors u1 up to un. Then, equivalently, A bar has eigenvalues 1 minus lambda 1 up to 1 minus lambda n.
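A concrete sketch of the normalized adjacency matrix and the Laplacian; the cycle graph is just an illustrative choice of regular graph.

```python
import numpy as np

# A 2-regular example graph: the cycle on n nodes.
n = 8
G = np.zeros((n, n))
for i in range(n):
    G[i, (i + 1) % n] = G[(i + 1) % n, i] = 1

d = G.sum(axis=1)                          # degrees d_i
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
A_bar = D_inv_sqrt @ G @ D_inv_sqrt        # A_bar_ij = G_ij / sqrt(d_i d_j)

# For a kappa-regular graph, A_bar is just G / kappa (here kappa = 2).
print(np.allclose(A_bar, G / 2))

# Normalized Laplacian L = I - A_bar: same eigenvectors, flipped spectrum.
L = np.eye(n) - A_bar
lam = np.linalg.eigvalsh(L)                # lambda_1 <= ... <= lambda_n
mu = np.linalg.eigvalsh(A_bar)
print(np.allclose(np.sort(1 - lam), mu))   # eigenvalues of A_bar are 1 - lambda_i
```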
Now these are in decreasing order, with the same eigenvectors. So you don't even have to think about the Laplacian separately. The Laplacian will come into play at some later places, but for now you can just think of the Laplacian as a flipped version of the normalized adjacency matrix -- nothing really different. So those were some slightly abstract preparations; now let's see what we can do with this. Here is, in my opinion, a pretty important theorem: Cheeger's inequality. It actually dates back to 1969, to Jeff Cheeger. It says the following: lambda 2 -- the second eigenvalue -- over 2 is less than the conductance of G, which is less than square root of 2 lambda 2. Why is this so important? It connects the conductance, the sparsest cut, to something linear-algebraic: the eigenvalues. The sparsest cut is very combinatorial -- if you really wanted to find the sparsest cut directly, you'd probably have to enumerate all the possible cuts; at least the definition is a combinatorial thing. But this inequality says that the sparsest-cut value has a lot to do with the eigenvalues of the Laplacian, or of the adjacency matrix; in particular, it's controlled by the second eigenvalue of the Laplacian matrix. And moreover, you can find an approximate cut S hat such that its conductance is less than square root of 2 lambda 2 -- which is less than 2 times the square root of phi of G -- computationally efficiently. Not only computationally efficiently, but actually pretty explicitly, by rounding the eigenvector. I guess "rounding" here is meant in the sense of approximation algorithms; if you don't know where the term comes from, it doesn't matter. So here is the procedure to find such a set S hat. Take u2, and suppose its coordinates are beta 1 up to beta n.
It's the second eigenvector. So you take a threshold tau equal to one of the beta i's, and consider S hat i to be the set of all coordinates j such that beta j is less than tau. You take a threshold, but you don't have to consider all possible real-valued thresholds -- it suffices to choose the threshold from among the coordinates. So you look at all the coordinates that are smaller than the threshold, and that's your S hat. You get sets S1 hat, S2 hat, S3 hat, and so forth, and one of these S hat i satisfies phi of S hat i less than 2 times square root of phi of G. So one of these sets will be a good cut. I stated this in a formal way, which may seem a little confusing, so let me say what you are really doing in plainer language. First sort the coordinates, so you get beta 1 less than beta 2, up to beta n. Then S hat i is the set of the first i coordinates, and one of these S hat i will be a good cut. So you can try one cut, which is just beta 1; you can try another cut, which is beta 1, beta 2; and another cut, which is beta 1, beta 2, up to beta i. One of these cuts will be a good cut of the graph, with small conductance. And of course, at the end you have to remap the coordinates back to the original coordinate system, because you sorted the coordinates. But that's the idea. Any questions? Another way to think about it is that in the stochastic block model case, the second eigenvector was something like plus ones and minus ones, and in that case, if you take a threshold, the smaller values correspond to one side of the cut and the larger values to the other. But here you don't know where the exact threshold should be.
You should try all the thresholds, beta 1 up to beta n, all of them. OK, cool. So this is a pretty magical theorem, in my opinion. I'm not going to prove it; if you are interested, there are a lot of lecture notes that prove it. [A student asks whether exactly one of the S hat i works, or possibly several.] Possibly several, and you can enumerate all of them: just try all of them and see which one is better. The proof is pretty nontrivial. It's not very long, but it's kind of nontrivial, so I'm going to skip it. I see some questions here. The question online is whether the S hat found this way is the best possible cut. No: you are not guaranteed to find the best possible cut. You're only guaranteed to find a cut whose value phi of S hat i is less than 2 times square root of phi of G. If instead of 2 square root phi of G you magically had phi of G there, that would mean you find a best cut, because phi of G is the value of the best cut; of course, there might be multiple best cuts, but you would definitely find one of them. However, we don't have that strong a theorem. We've only shown 2 times square root of phi of G, so you lose something. Square root of phi of G is bigger than phi of G, by the way, because phi of G is less than 1. So you lose some factor relative to the best possible conductance. I hope that answers the question. Anyway, to some extent you have to lose a little bit. This is somewhat post hoc, but in retrospect, one of these quantities is very combinatorial, the sparsest cut, and the other is very linear algebraic. It sounds unlikely that they could be exactly the same, right? So it's already kind of fortunate that they are related at all, in my opinion.
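The sweep-cut rounding described above can be sketched in a few lines. This is a minimal illustration, not the lecture's official code; the function name `sweep_cut` and the toy graph are my own, and conductance is computed by brute force over the prefix cuts:

```python
import numpy as np

def sweep_cut(A):
    """Cheeger-style rounding: sort vertices by the second eigenvector of the
    normalized adjacency matrix, then return the best prefix cut by conductance."""
    d = A.sum(axis=1)
    A_bar = A / np.sqrt(np.outer(d, d))       # D^{-1/2} A D^{-1/2}
    _, vecs = np.linalg.eigh(A_bar)           # eigenvalues in ascending order
    u2 = vecs[:, -2]                          # second-largest eigenvector
    order = np.argsort(u2)                    # sorted coordinates beta_1 <= ... <= beta_n
    total_vol = d.sum()
    best_phi, best_S = np.inf, None
    for i in range(1, len(order)):            # try every prefix threshold
        S = order[:i]
        in_S = np.zeros(len(d), dtype=bool)
        in_S[S] = True
        cut = A[np.ix_(in_S, ~in_S)].sum()    # edges crossing the cut
        phi = cut / min(d[S].sum(), total_vol - d[S].sum())
        if phi < best_phi:
            best_phi, best_S = phi, set(S.tolist())
    return best_phi, best_S

# two triangles joined by a single edge: the sweep should find one triangle
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1
phi, S = sweep_cut(A)   # phi = 1/7, S is one of the two triangles
```

On this graph the best prefix is one triangle, with one crossing edge and volume 7, so the conductance is 1/7.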
I'll mostly discuss some of the intuitions, some of the more basic properties: some intuition for why this can possibly be true, but I won't give the full proof. [Student:] The statement up there says we can find an S hat whose conductance is less than square root of 2 lambda 2; is that what we're actually finding, and then we're just using the transitive property to say it's less than 2 square root of phi of G? And in that case, do we just care about the relation to phi of G, the comparison of S hat to the best cut of G? The square root 2 lambda 2 bound itself isn't significant, other than that it lets us chain the inequality? [Prof:] Sure. First of all, yes, you are right: how do you get this inequality? Just by using that part. And second, yes, probably the first thing you care about is comparing with phi of G, and these are just some intermediate quantities. But if you look at the proof, the eigenvalues do have to show up somewhere. [Student:] So do you not use anything that came out of that second bound? [Prof:] Maybe your point is actually pretty good: when 2 lambda 2 is relatively small, you would use more of that second bound. I think it's possible, but we don't really know in general; it's kind of hard to say. There are hard instances in both cases: this quantity can be close to lambda 2 over 2, or it can be very close to the other side. Cool. So I'll focus on some intuitions. The first thing I want to discuss is, I think, again about the scaling, to some extent. First of all, why do you take the second eigenvector rather than the top one? That always seemed somewhat magical to me at first sight. And then after I spent some time with it, I realized the top eigenvector, like we said last time,
The top eigenvector is kind of like a background. So either the smallest eigenvector of L or, equivalently, the top eigenvector of A bar: this is kind of not that interesting. And why is it not interesting? It's pretty much only capturing what I'd call the background, a kind of background density of the graph. What I really mean by this is the following. Suppose G is kappa-regular. I think we actually stated this in a previous lecture: the all-ones vector is the top eigenvector of the adjacency matrix of G, and thus also the top eigenvector of A bar, which is just 1 over kappa times A. So when G is regular, the top eigenvector is really just the all-ones vector. And in the more general case, it just involves a scaling based on density: for general G, the top eigenvector is u1 = (square root of d1, ..., square root of dn). The scale doesn't matter here, because any scalar multiple of an eigenvector is also an eigenvector, so I don't care about the normalization. So this is the top eigenvector of A bar, which means it is the smallest eigenvector of the Laplacian. Why is this the case? You can verify it relatively easily. Take A bar times u1 (this is matrix-vector multiplication) and look at the ith coordinate: it equals the sum over j of A bar ij times u1 j. And A bar ij is the scaled version of the graph, G ij over square root of di times square root of dj, while u1 j is square root of dj. So the square root dj's cancel, and you get 1 over square root of di in front, times the sum over j of G ij. And recall that this sum is precisely the definition of the degree: the total number of edges connected to vertex i. So you get 1 over square root of di times di, which is square root of di. That verifies that u1 is an eigenvector: A bar u1 equals u1, with eigenvalue 1.
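You can check this claim numerically in a couple of lines. The cycle-plus-chord graph here is just a hypothetical example chosen so that the degrees are unequal:

```python
import numpy as np

n = 6
A = np.zeros((n, n))
for i in range(n):                      # a 6-cycle
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
A[0, 3] = A[3, 0] = 1                   # add a chord, so the degrees differ
d = A.sum(axis=1)                       # degrees (a mix of 2s and 3s)
A_bar = A / np.sqrt(np.outer(d, d))     # normalized adjacency D^{-1/2} A D^{-1/2}
u1 = np.sqrt(d)                         # the claimed top eigenvector
print(np.allclose(A_bar @ u1, u1))      # True: eigenvalue exactly 1
```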
So basically, as before, the top eigenvector is not doing much: it is really just capturing the degrees of the graph. The second eigenvector starts to talk about the interconnections; it says more about the relationships between edges and hidden communities. Now let's look at some intuition for why this eigenvector is related to the cut. Here is another way to think about it: look at the quadratic form of the Laplacian, v transpose L v. What is this? It is v transpose I v minus v transpose A bar v. Let's just write this out: it is the sum of vi squared, i from 1 to n, minus the sum over i, j of vi vj times A bar ij. And A bar ij is G ij over square root of di times square root of dj, which is nonzero exactly when (i, j) is an edge. So what we get is the sum of vi squared, minus the sum over edges (i, j) in E of 2 times vi over square root of di times vj over square root of dj; the 2 appears because both (i, j) and (j, i) show up in the double sum. And now I'm claiming that this equals the sum over edges (i, j) in E of (vi over square root of di minus vj over square root of dj) squared. Why is this true? You can expand each square into its terms. The cross terms match the minus-2 part, so the only thing to check is that the squared terms match the sum of vi squared. We can verify that: take the sum over edges (i, j) in E of vi squared over di. If you sum over j first and then over i, the number of edges connected to i is exactly di, so you get vi squared over di times di, which is vi squared. That's why the squared terms sum to the sum of vi squared. So the identity checks out exactly; I thought for a moment I might be missing a constant somewhere, but there isn't one.
If you double-check, the constants work out exactly; you get the identity just this way. OK. And if G is a regular graph, say kappa-regular, then you can ignore the individual degrees: v transpose L v is 1 over kappa times the sum over (i, j) in E of (vi minus vj) squared. OK. So why did I do so much work to get this equation? I think the equation is very important because this is how the algebraic quantities link to the conductance. This is an algebraic quantity: it is linear algebra, a quadratic form. However, suppose now you restrict v to be binary. Take v to be a binary vector, and take S to be the support of v, the indices where the entry of v is 1. Then you can see from this formula that v transpose L v is 1 over kappa times the sum over edges of (vi minus vj) squared. And when is a term equal to 1? (vi minus vj) squared is 1 exactly when i and j are in different groups: when i is in S and j is in S bar, or i is in S bar and j is in S, right? So this sum is just the number of edges between S and S bar, because only when the edge crosses between the groups is (vi minus vj) squared equal to 1; otherwise it is 0. So v transpose L v is 1 over kappa times the number of edges across S and S bar. The quadratic form connects to the number of edges across the two groups when v is binary. If v is not binary, of course, this is not true; but when it is binary, it is. In other words, you can write v transpose L v as 1 over kappa times the number of edges leaving the support of v. Now suppose the size of the support of v is less than n over 2. This means the volume of S is less than the volume of V over 2, because in a regular graph the volume of a set is just kappa times its size.
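Since the bookkeeping with the degrees is easy to get wrong, here is a quick numerical check of the identity v^T L v = sum over edges (i, j) of (v_i/sqrt(d_i) - v_j/sqrt(d_j))^2, on an arbitrary small graph of my own choosing:

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
d = A.sum(axis=1)
L = np.eye(n) - A / np.sqrt(np.outer(d, d))   # normalized Laplacian I - D^{-1/2} A D^{-1/2}

rng = np.random.default_rng(0)
v = rng.standard_normal(n)                     # an arbitrary real vector
lhs = v @ L @ v
rhs = sum((v[i] / np.sqrt(d[i]) - v[j] / np.sqrt(d[j]))**2 for i, j in edges)
print(np.isclose(lhs, rhs))                    # True: the identity is exact
```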
And then in this case, consider the ratio v transpose L v over the norm of v squared. The numerator is 1 over kappa times the number of edges between S and S bar. And what is the norm of v squared? It is just the size of S. And the size of S is just the volume of S over kappa: the volume is the number of edge endpoints in S, and in a regular graph that is kappa times the size of S. So the kappas cancel, and you get E(S, S bar) over the volume of S, which is exactly the conductance of S. So the conductance of S can be written in this form, which is a linear algebraic form: v transpose L v over the norm of v squared. This is called the Rayleigh quotient. And the point here is that the Rayleigh quotient connects to the conductance. But of course, the connection is not exact, because it requires v to be binary, right? Computing eigenvectors means minimizing the Rayleigh quotient without any constraints on v. But the sparsest cut means minimizing the Rayleigh quotient with the binary constraint. And in some sense, what Cheeger's inequality says is that with the constraint and without the constraint, the minimum values don't differ by that much. Actually, the proof works out roughly like this: you first find the eigenvector, which gives you some real-valued v (the eigenvector has real-number entries), and then you round it into a binary vector, and you show that by rounding you don't lose too much of the Rayleigh quotient. That's roughly how the proof works. So I guess that's the intuition. And all of this can be extended to a weighted graph.
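For a binary v on a regular graph, you can watch the Rayleigh quotient collapse to the conductance. The prism graph below is a hypothetical 3-regular example of mine:

```python
import numpy as np

# prism graph: two triangles joined by a perfect matching, so it is 3-regular
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (0, 3), (1, 4), (2, 5)]
n, kappa = 6, 3
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.eye(n) - A / kappa          # normalized Laplacian of a kappa-regular graph

S = {0, 1, 2}                      # one triangle; |S| = n/2, so vol(S) <= vol(V)/2
v = np.zeros(n)
v[list(S)] = 1.0                   # binary indicator vector of S
rayleigh = (v @ L @ v) / (v @ v)   # the Rayleigh quotient

cut = sum(1 for i, j in edges if (i in S) != (j in S))   # edges across (S, S bar)
phi_S = cut / (kappa * len(S))     # conductance: vol(S) = kappa * |S|
print(rayleigh, phi_S)             # both equal 1/3
```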
The intuition is the same for graphs that are not regular, and also for graphs that are weighted. Here the graphs were just binary, 0-1 edges, but you can also do it for a weighted graph. So, great. I hope I've convinced you that eigenvectors are very related to graph clustering, by these two examples: the stochastic block model and this worst-case setting. And this kind of algorithm has been used in practice; this is spectral clustering. OK, how do I say this? The material I presented mostly comes from the theoretical computer science community, and there it doesn't have that much to do with machine learning, right? What people care about there is just partitioning a given graph into two clusters. Machine learning came to these kinds of problems later, with the so-called spectral clustering approach. This was brought to the machine-learning community around 2000, I think by the papers of Shi and Malik, and of Ng, Jordan, and Weiss. And the way you do it is that you build a graph from the machine-learning data, and then you apply this algorithm. So this brings us to the question of how to choose, or design, the graph. In TCS, the graph was given to you, some graph that somebody hands you; but in machine learning, you have to somehow choose your graph, right? In Andrew Ng's paper, the definition of the graph is something like this. You are given some raw data, say x1 up to xn; these are your data points. And then you define a graph G by giving weights G ij. This is a weighted graph; I didn't really discuss weighted graphs, but there's a natural extension to them.
And in the weighted graph, the weight between i and j is something like exp of minus the 2-norm of xi minus xj, squared, over 2 sigma squared. This is probably very familiar to you: it is just the RBF kernel, the Gaussian kernel. So you define this, with sigma as a tuning parameter, or you can use some other variants. So you define a graph based on the distances between your examples. And then you do spectral clustering: you define the graph G and get the eigenvectors of the Laplacian, or of the normalized adjacency matrix. And here it's not only two clusters; you can do multiple clusters. When you do multiple clusters, what you do is take eigenvectors u1, u2, up to uk, supposing you want k clusters, and stack them into a matrix of dimension n by k: each column is an eigenvector, and you have k of these eigenvectors. And now you take the rows as the embeddings, or, in the modern word, representations (probably some of you have heard of representation learning), for the examples. So every example xi now becomes represented as vi, the ith row, which has dimension k; k corresponds to how many eigenvectors you take. So you've got these low-dimensional representations v1 up to vn. And then, in the original paper, Ng's paper, you do another clustering step, k-means (I guess you've probably heard of k-means), on the representations v1 to vn, to cluster them again. So this is the so-called spectral clustering algorithm.
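Putting the pipeline together, here is a minimal sketch in the spirit of the Ng-Jordan-Weiss procedure. The helper name, the farthest-point k-means initialization, and the row-normalization step are implementation choices of mine, not prescribed by the lecture:

```python
import numpy as np

def spectral_clustering(X, k, sigma=1.0, n_iter=50):
    """RBF graph -> top-k eigenvectors of the normalized adjacency ->
    rows as embeddings -> k-means on the rows."""
    sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    W = np.exp(-sq / (2 * sigma**2))                 # Gaussian kernel weights
    np.fill_diagonal(W, 0.0)                         # no self-loops
    d = W.sum(axis=1)
    A_bar = W / np.sqrt(np.outer(d, d))              # D^{-1/2} W D^{-1/2}
    _, vecs = np.linalg.eigh(A_bar)
    U = vecs[:, -k:]                                 # top-k eigenvectors; rows = embeddings
    U = U / np.linalg.norm(U, axis=1, keepdims=True) # row-normalize the embeddings
    # tiny k-means with farthest-point initialization
    centers = [U[0]]
    for _ in range(1, k):
        dist = ((U[:, None] - np.array(centers)[None])**2).sum(-1).min(1)
        centers.append(U[dist.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = ((U[:, None] - centers[None])**2).sum(-1).argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = U[labels == c].mean(0)
    return labels

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(5, 0.3, (10, 2))])
labels = spectral_clustering(X, 2)   # splits the two well-separated blobs
```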
And there were actually later papers, I think around 2013 or 2014, that analyze this and show you can actually get reasonable representations and clusters using this approach. Any questions? So what's the issue with this? The issue is that the graph G may not be very meaningful. In high dimension, all the data points (all the training data points, to be precise) are very far away from each other, and the Euclidean distance between them becomes pretty much meaningless. In particular, if you compare the Euclidean distance between a cat and a dog with the Euclidean distance between a dog and another dog, you probably wouldn't see much difference, because two random dogs can still have a very big Euclidean distance. And I think this is sometimes the problem with this approach: the graph itself is not meaningful. You need to find the sparsest cut of the graph, but if the graph itself is not very useful, even finding its sparsest cut is not that useful for you. That's why the theory, the analysis of this spectral clustering algorithm, doesn't really deliver that much: it doesn't consider how the graph was generated. All this theory says is that if you're given a good graph, you can find its sparsest cut with this approach; it doesn't say anything about how the graph is generated. So for the last 15 minutes, I'm going to briefly discuss one recent piece of work from my group, where we try to reuse this classic idea, but in a different way. This is the paper by HaoChen et al. from my group. What we are trying to do is the following: consider an infinite graph G = (V, W), where V is the vertex set, W gives the weights on the edges, and we take V to be all the possible inputs.
So V is all possible data points. This graph depends on the population: it is defined on the space of all possible, let's say, images, and each image corresponds to a vertex. Before, the graph had size little n; it was a little-n-by-little-n matrix. Now the graph has a much bigger size: its size is the cardinality of the set of all possible data points, which could be infinite, or, say, exponential, if you count the number of possible images. So let's say we have an exponential-size graph. On this graph, you define the weight w(x, x prime) between two vertices. We define it to be large only when x and x prime are close, close in L2 distance. So I'm still using L2 distance here. I'm not specifying exactly what the definition is, because that requires more care than I can fit in 10 minutes; pretty much you can think of it as almost the same as the previous definition of the graph, with the weight large when x and x prime are close. But I guess the point is that "close" here means very close. Before, you had to choose the kernel width very carefully, because all the points are far away from each other. Now you say: I don't mind that all those points are far away from each other; I only care about pairs of points that are very close. So suppose you have two different dogs: you say they are not connected. But if you have one dog, and then a perturbation of that same dog, you say those two are connected to each other. Then this graph becomes more meaningful, because you only connect very nearby cats and dogs, very nearby images.
So the graph becomes more meaningful; that's the pro. The cons are that it becomes infinite (or exponential) dimensional, and that you don't actually have this graph, because you don't know all the possible data points; you only have some sampled data points. And there's another con: even the eigenvector itself is high dimensional, infinite dimensional, because the dimension of the eigenvector is the same as the size of the graph. So here is how we fix these cons. We use deep learning ideas; actually, the real research goes in the reverse direction, in that we are trying to explain deep learning methods, but in this context you can think of it as using the parameterized-neural-network idea to deal with these cons. What you do is the following. Suppose you have an eigenvector u. Here, the eigenvector is a high-dimensional vector: it has entries u sub x, indexed by all the possible data points x in the capital X. So it has dimension something like R to the capital N, or R to the infinity, depending on how many vertices are in your set. You don't even have space to store all of it; even for a single vector, you don't have space to save it. But what you do is represent this u sub x by a neural network applied to the raw data point x: u sub x equals f theta of x, where f theta is a parameterized model. If you do this, then at least you can describe the eigenvector by theta. You don't have to specify all capital-N numbers to specify the eigenvector; you only have to specify theta.
Of course, for this to make sense you have to believe that f theta is powerful enough to express eigenvectors. A priori that is not obvious, so you have to make some assumption that neural networks can represent these kinds of eigenvectors. But under that assumption, you can at least represent the eigenvectors by theta. And now the question changes: you want to find theta such that the vector (f theta of x, over all x), this very high-dimensional vector, is an eigenvector of the graph G. So you are trying to find a parameter theta; you are not trying to find a high-dimensional vector anymore. And it turns out that if you do this, say for an eigenvector of the Laplacian, there is an algorithm to achieve it. Let me see whether I have time. So what we can do is the following. How do I find the eigenvectors of L, or of A bar? Suppose I have access to the whole graph, which I don't, but suppose I have it. What I can do is minimize, over N by k matrices F whose rows are indexed by the data points, the Frobenius norm of A bar minus F F transpose, squared. First of all, I claim that minimizing this gives the top eigenvectors of A bar. This is something I probably won't have time to explain fully, but it is a classical fact: if you want to fit a rank-k matrix F F transpose to the matrix A bar, the best fit is obtained from the top k eigenvectors of A bar. You can invoke a theorem (essentially the Eckart-Young theorem) to show this. Basically, the minimizer F of this objective will be some version of the eigenvectors.
So the minimizer F will be some scaling of the eigenvectors. And then, if you use this objective, you can replace the capital F, which is non-parametric, a very big matrix, by a parameterized version. You write F as the matrix whose rows are f theta of x1 transpose, up to f theta of x N transpose. So you replace each row by the parameterized version: every row is now a neural network applied to the raw data. Then, writing the objective out, the Frobenius norm is the sum over i, j of (A bar ij minus the ij-th entry of F F transpose) squared, and the ij-th entry of F F transpose is the inner product of the ith row and the jth row. That's why the objective equals the sum over i, j of (A bar ij minus f theta of xi transpose f theta of xj) squared. And now, instead of minimizing over F, you minimize over theta. I don't have time to go through all the details, but this is now an objective function you can optimize. Of course, the problem is that you still have this big sum over all pairs of possible data points. You can replace it by the empirical version: take some random samples and estimate the sum by an empirical estimate over your examples. And it turns out that you can simplify this formula, and it becomes something very similar to the contrastive learning algorithms used in practice. This part I don't really have time to show; I'll refer you to the paper. I think I should probably just stop here. Are there any questions first? I know this part is a little vague; feel free to ask anything. [Student:] Do you know of any contrastive learning paper we could look at, off the top of your head?
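The claim that minimizing the Frobenius error over F F transpose picks out the top eigenvectors can be sanity-checked numerically. Here I build a symmetric stand-in for A bar with a known spectrum; the specific eigenvalues are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))        # random orthogonal basis
target = np.array([-1.0, -0.5, 0.0, 0.3, 0.6, 1.0, 2.0, 3.0])
M = Q @ np.diag(target) @ Q.T                            # stand-in for A_bar

k = 2
vals, vecs = np.linalg.eigh(M)                           # ascending eigenvalues
F_star = vecs[:, -k:] * np.sqrt(vals[-k:])               # rows play the role of f_theta(x_i)
res_star = np.linalg.norm(M - F_star @ F_star.T)**2

# best rank-k fit: the residual is the sum of the squared dropped eigenvalues
print(np.isclose(res_star, (vals[:-k]**2).sum()))        # True
F_rand = rng.standard_normal((8, k))                     # any other F does worse
print(np.linalg.norm(M - F_rand @ F_rand.T)**2 >= res_star)  # True
```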
The paper to look at, I think, is ours. [INAUDIBLE] Yeah. The loss is not exactly the contrastive learning loss used in practice; we call ours the spectral contrastive loss. Basically, if you have all of the above, this step is pretty straightforward: you can simplify the objective a little bit, and you get one term that looks like minus f theta of xi transpose f theta of xj for a positive pair, which is the term that tries to make two augmentations of the same example closer to each other, and another term that tries to contrast them, pushing apart representations of independent examples. Anyway, I'll just refer you to our paper. I think the title is something like "Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss"; from "spectral contrastive loss" you can search for the full title. [Student:] In the session just before this one, you mentioned that you can take the eigenvectors, line them up as columns, and then the first row of that matrix corresponds to the first data point. What exactly is the claim? Is it that if the first and second rows are similar, then the first two data points should be similar? [Prof:] I think I got the question, and I understand why there is some confusion, because I skipped how you deal with k clusters. But take a little leap of faith between k clusters and two clusters, and just say it's two clusters. Then let's see: where did we discuss this? We discussed it somewhat implicitly several times. For example, go back to the second eigenvector, with coordinates beta 1 up to beta n, and recall that we discussed taking a threshold.
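For concreteness, here is a sketch of the empirical objective as I understand it from that line of work: a positive-pair attraction term plus a squared-inner-product repulsion term. The function name and the toy features are mine, and using all pairs for the second term is a simplification:

```python
import numpy as np

def spectral_contrastive_loss(Z1, Z2):
    """Z1[i], Z2[i] are features of two augmentations of example i.
    Loss = -2 E[f(x)^T f(x+)] + E[(f(x)^T f(x'))^2], with the second
    expectation approximated over all pairs of examples."""
    pos = -2.0 * np.mean(np.sum(Z1 * Z2, axis=1))   # pull positive pairs together
    G = Z1 @ Z2.T                                   # all cross inner products
    neg = np.mean(G**2)                             # push (roughly) independent pairs apart
    return pos + neg

# toy check with 1-d features and no augmentation noise
Z = np.array([[1.0], [-1.0]])
print(spectral_contrastive_loss(Z, Z))   # -2 * 1 + mean([1, 1, 1, 1]) = -1.0
```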
And then you can separate the two groups with a threshold. So in this case, with two clusters, beta i is your representation of the ith vertex: that is the ith row. Beta 1 is the first row, beta 2 is the second row, and so on. So beta i is the representation of the ith vertex. And why is beta i better than the raw data? Because at least with a threshold on beta i, you get the groups right. In some sense, the ideal situation is as follows: suppose, in the stochastic block model, you get an eigenvector that is constant on each group, say 1 on one group. Then I guess you'd agree that these numbers are better representations than the original data, because now all the vertices in the same group map to the same value. You've lost all the other information; the representation exactly tells you the group membership and nothing else. And if the group membership is the only thing you care about, that's why these numbers are better representations than the raw data. [Student:] Is it similar to low-rank matrix approximation, where approximating by a low-rank matrix gives a better representation because you've kept the most important parts? [Prof:] Exactly. And what's the most important part? In this case, the most important part is the clustering structure, which group you belong to. If you think that's the most important information, your representation should just be that: you ignore all other information and say the group ID is my representation. And that's the best representation, but only in this case, where we said there are two clusters. [Student:] Right. [AUDIO OUT] So we care about the 2-cluster representation, but maybe we also care about a 3-cluster representation, and how close things are based on that.
And so by taking multiple eigenvectors, we can get a bigger picture, not just one clustering. [Prof:] Exactly, exactly. If you take more eigenvectors, you get 3-cluster information, or even more. And some of this information can be recombined to get even richer information, because eventually you'll probably use this representation with a linear head fit on top of it. So if you have two types of information in your representation, you can combine them to get more. But you are right: the more eigenvectors you take, the richer the information you keep from the graph. Essentially, it's a kind of compression: you distill the information in the graph down to a smaller amount of information. The question we are trying to answer is what information you keep in the eigenvectors. It's not surprising that the eigenvectors keep some information about the graph; the question is what, specifically, we can glean. And the intuition is that the smallest eigenvectors of the Laplacian keep the cluster structure of the graph, but not other things. OK. Great. I think this is the end of the quarter. I hope you liked the course. We discussed quite a bunch of topics; actually, this quarter I think we covered the most compared to all the previous quarters, partly because we had 10 more minutes in every lecture, and also because we had two more lectures, since there were fewer holidays this quarter. Yeah, I hope you liked it. Thanks. Thanks so much for attending.
Stanford CS229M: Machine Learning Theory, Fall 2021. Lecture 18: Unsupervised learning, mixture of Gaussians, moment methods.

OK. So I guess let's get started. Today we're going to discuss a few small things that were left over from previous lectures, and then we're going to move on to unsupervised learning. The first thing: recall that last time we talked about the implicit regularization of noise, and we mentioned that in certain cases you can prove that noisy GD prefers a smaller value of a quantity R(theta), which is defined to be something like the trace of the Hessian. In the first part of this lecture, I'm going to spend probably 10 to 15 minutes briefly discussing why this is a reasonable thing to try to minimize, or to regularize: why the trace of the Hessian is a meaningful quantity. This part won't be exactly rigorous, because you have to do some approximations and so forth; I'm just going to do a somewhat heuristic derivation to justify why something like the Hessian would be useful to regularize. So the question is: what is the Hessian? Maybe I should write L hat; this is the Hessian of the empirical loss. For simplicity, let's consider only one data point. Let's denote by f theta of x the model output, and let l(f, y) be the loss function. Then L hat of theta, in this case, is just l(f theta of x, y), and we can compute what the Hessian is. The Hessian is the gradient of the gradient. So what's the gradient?
If you use the chain rule, what you get is partial l over partial f, times the gradient of f_theta(x) with respect to theta. The first factor is a number: l is a scalar function of f, and f is a scalar, so partial l over partial f is a scalar, times the gradient of f_theta at x. And now you are taking the gradient of a product of two quantities, where one is a scalar and the other is a gradient vector, so you apply the chain rule again. First you take the gradient with respect to the scalar part. What you get from that is the second-order derivative of l with respect to f, times gradient f_theta(x) times gradient f_theta(x) transposed -- this part is the gradient of the scalar factor, and the other gradient is copied from before. I guess this is something that you can verify offline, if you look at all the coordinates and do all the calculations. And then you do the chain rule for the other part. What you get is partial l over partial f times the second-order derivative of the model with respect to theta, the Hessian of f_theta at x. That's a matrix of dimension p by p, if p is the number of parameters. And in the first term, the second-order derivative of l is a scalar, and the gradient times its transpose is a p by p matrix, so the whole thing is a p by p matrix. So this is a general formula, which is rigorously true. Now suppose the loss function is l(f, y) = 1/2 (y - f)^2, the squared loss. Then what is the second-order derivative of this loss function with respect to f? The loss is a quadratic function of f, and the leading term is f^2 / 2, so the second-order derivative with respect to f is 1.
So the first term is equal to 1 times gradient f_theta(x) times gradient f_theta(x) transposed. And the first-order derivative of the loss with respect to f is (f - y), so the second term is (f - y) times the Hessian of f_theta at x. And what you can see is that the first term is a PSD term. It is PSD because it is the outer product of a vector with itself, a rank-one PSD matrix, scaled by a non-negative scalar. And the second term is not necessarily PSD. So the Hessian may not be PSD in general, of course, because you have a non-convex function, but one of the terms is PSD. And in general, if you have a convex loss function, this is called -- I don't know why this is called this, but it's called the Gauss-Newton decomposition. I think it must have something to do with these two famous people at some point. But it's called the Gauss-Newton decomposition. And in general, the first term is always PSD for a convex loss function. By loss function, I really mean the typical ones, either quadratic loss or cross-entropy loss. They're all convex, right? So in almost all the cases we study, the first term is PSD. And empirically, people found that the second term is in most cases small. There could be multiple reasons for this. So empirically, the second term, with its (f - y) factor, is generally smaller. And one reason could be that at least when you are at a global minimum, this term is 0. So when theta is at a global min, meaning f_theta(x) is equal to y, right? A global min fits the data exactly. So in this case, this term is literally 0, because f minus y is 0. So this could be one reason why empirically the second term is relatively small. Of course, this is not always true; it's not always the case that you can fit the data exactly. But somehow, people found that the second term is somewhat smaller than the first term.
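Since this decomposition is just calculus, it can be checked numerically. Below is a small sketch using finite differences on a hypothetical toy model f_theta(x) = theta0 * tanh(theta1 * x) (my own choice of model, not from the lecture):

```python
import numpy as np

# Toy model (hypothetical, just for the check): f_theta(x) = theta0 * tanh(theta1 * x)
def f(theta, x):
    return theta[0] * np.tanh(theta[1] * x)

def loss(theta, x, y):
    return 0.5 * (y - f(theta, x)) ** 2

def num_grad(g, theta, eps=1e-5):
    # central finite-difference gradient of a scalar function g
    return np.array([(g(theta + eps * e) - g(theta - eps * e)) / (2 * eps)
                     for e in np.eye(len(theta))])

def num_hess(g, theta, eps=1e-5):
    # finite-difference Hessian: the gradient of the gradient
    return np.array([num_grad(lambda t, i=i: num_grad(g, t, eps)[i], theta, eps)
                     for i in range(len(theta))])

theta, x, y = np.array([0.7, -1.3]), 0.9, 0.4

grad_f = num_grad(lambda t: f(t, x), theta)        # gradient of f w.r.t. theta
hess_f = num_hess(lambda t: f(t, x), theta)        # Hessian of f w.r.t. theta
residual = f(theta, x) - y                         # (f - y)

gauss_newton = np.outer(grad_f, grad_f)            # the PSD term
full_hess = num_hess(lambda t: loss(t, x, y), theta)

# Gauss-Newton decomposition for squared loss:
# Hessian of loss = (grad f)(grad f)^T + (f - y) * (Hessian of f)
assert np.allclose(full_hess, gauss_newton + residual * hess_f, atol=1e-4)
# and the first term is indeed PSD
assert np.min(np.linalg.eigvalsh(gauss_newton)) >= -1e-9
```

The same check works for any smooth model; only the constant 1 in front of the first term changes if the loss is not the squared loss.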
So if you don't care about very nuanced quantities about the Hessian, then the first term is a reasonable approximation of the Hessian. Of course, in certain cases you do care about the nuances. For example, when you care about whether the function is convex or not, even a single negative eigenvalue makes it non-convex, so then the second term becomes important. But if you just want a rough picture, the second term is not that important. So that's the rough intuition. And now suppose we ignore the second term. This is a big assumption, but suppose we ignore the second term, for whatever reason: for example, you can ignore it just because it's empirically small, or you can ignore it because you are at a global minimum. Then let's see what the trace of the Hessian is. The trace of the Hessian is approximately equal to the second-order derivative of the loss with respect to f, which is 1 if you have the squared loss, times the trace of gradient f_theta(x) times gradient f_theta(x) transposed, which is equal to the squared two-norm of the gradient of f_theta(x) with respect to theta. So you can see that by minimizing the trace of the Hessian, you are minimizing the squared l2 norm of the gradient of the model with respect to the parameters. So minimizing the trace of the Hessian is, heuristically, somewhat similar to minimizing the Lipschitzness of the model with respect to theta. And why is minimizing the Lipschitzness of the model with respect to theta useful? Actually, first of all, this is indeed useful: if you just explicitly minimize this, people have found that empirically it helps. And why is it useful? If you allow some heuristics, you can also say that this is very similar to minimizing the Lipschitzness of the model output with respect to the hidden variables. I think this is something that we discussed probably a few weeks ago when we talked about the all-layer margins.
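Here is a quick numeric sketch of that trace identity at a point where the second term vanishes exactly: y is chosen so that theta fits the data, i.e. theta is a global minimum of the squared loss. The toy model is again a hypothetical choice of mine:

```python
import numpy as np

# Toy model (hypothetical): f_theta(x) = theta0 * tanh(theta1 * x)
def f(theta, x):
    return theta[0] * np.tanh(theta[1] * x)

theta = np.array([0.8, 1.5])
x = 0.7
y = f(theta, x)                    # choose y so that theta is a global minimum

def loss(t):
    return 0.5 * (y - f(t, x)) ** 2

eps = 1e-4
def num_grad(g, t):
    # central finite-difference gradient
    return np.array([(g(t + eps * e) - g(t - eps * e)) / (2 * eps)
                     for e in np.eye(len(t))])

# finite-difference Hessian of the loss at the global minimum
hess = np.array([num_grad(lambda t, i=i: num_grad(loss, t)[i], theta)
                 for i in range(2)])
grad_f = num_grad(lambda t: f(t, x), theta)

# at a global min of the squared loss:
# trace(Hessian) = squared 2-norm of the parameter gradient of the model
assert np.isclose(np.trace(hess), np.linalg.norm(grad_f) ** 2, atol=1e-4)
```

Away from a global minimum the identity only holds approximately, with an error controlled by the (f - y) term.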
So recall that if you have a network parameterized by theta, which consists of a bunch of layers -- suppose you have a deep network with a lot of weight matrices -- then the derivative of the model with respect to some layer's weights W_i is equal to the derivative of the model with respect to the pre-activation of the layer above, times the hidden variable below, transposed. So this is (partial f over partial h'_{i+1}) times h_i transposed, where h'_{i+1} = W_i h_i. So this is the so-called -- OK, I guess now I remember, in a narrow sense this is called the Hebbian rule. But technically, it's really just a simple chain rule. You want to take the derivative with respect to a parameter, and the parameter comes into play between h_i and h'_{i+1}. So h_i is the i-th layer and h'_{i+1} is the pre-activation of the (i+1)-th layer; I call it h prime just so that I can distinguish it from the post-activation. But I guess you get the point. The point is that if you take the derivative with respect to a parameter, it's actually very closely related to the derivative with respect to a hidden variable, and to the norm of the hidden variable h_i, all right? So in Euclidean norms: the Frobenius norm of this outer product is equal to the two-norm of the derivative with respect to the pre-activation, times the two-norm of h_i. So minimizing the Lipschitzness with respect to the parameters is similar to minimizing the Lipschitzness with respect to the hidden variables. I think this is something we have discussed before when we did the all-layer margin, right? When we talk about the derivative with respect to the hidden variables, this is kind of like the all-layer margin. I guess you are maximizing the all-layer margin, because the all-layer margin is bigger if the model has a smaller Lipschitz constant. So not all of these steps can be made 100% rigorous. Some of the intermediate equations that I've written are exactly true.
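One of the exactly-true pieces is this chain-rule identity and the resulting factorization of the Frobenius norm. Here is a numeric sketch on a hypothetical two-layer network (tanh activation, my own toy choice):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 4
W = rng.normal(size=(m, d))    # the layer weights we differentiate through
v = rng.normal(size=m)         # output layer weights
x = rng.normal(size=d)         # the hidden variable below W (here: the input)

def f(Wmat):
    # toy two-layer network: f = v^T tanh(W x)
    return v @ np.tanh(Wmat @ x)

# gradient of f w.r.t. the pre-activation h' = W x (one step of backprop)
h_pre = W @ x
g = v * (1.0 - np.tanh(h_pre) ** 2)      # df/dh'

# the chain-rule identity from the lecture: df/dW = (df/dh') h^T, an outer product
grad_W = np.outer(g, x)

# verify against finite differences
eps = 1e-6
num = np.zeros_like(W)
for i in range(m):
    for j in range(d):
        E = np.zeros_like(W)
        E[i, j] = eps
        num[i, j] = (f(W + E) - f(W - E)) / (2 * eps)
assert np.allclose(grad_W, num, atol=1e-6)

# Frobenius norm of an outer product factorizes:
# ||df/dW||_F = ||df/dh'||_2 * ||h||_2
assert np.isclose(np.linalg.norm(grad_W),
                  np.linalg.norm(g) * np.linalg.norm(x))
```

So the parameter Lipschitzness at a layer is exactly the hidden-variable Lipschitzness scaled by the norm of the activation below it.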
But I don't think all of these steps can be made completely rigorous. Sometimes this is just the nature of neural networks, where you cannot be 100% precise because things don't match exactly. But I think the intuition is really just that the Hessian relates to the Lipschitzness of the model with respect to the parameters, and the Lipschitzness of the model with respect to the parameters relates to the Lipschitzness of the model with respect to the hidden variables, which is kind of what the all-layer margin captures. Any questions? OK. So this was the first thing, the remaining remarks from the last lecture about the implicit regularization of the noise. And there's another thing I want to discuss, which is an omission of mine: I forgot to provide a proof for one of the theorems that we discussed, I think two weeks ago, about the implicit regularization effect in the classification case. There, at the end of the lecture, we were only able to state the theorem and the basic intuition, but we weren't able to really show the proof. The proof is very simple and short, just one page, and I think it's a very nice proof, so I really want to show it to you. So let's discuss that in this next part. Let me remind you what the theorem was about. So two lectures ago, we showed the following theorem. The context is that we have a linear model for classification, and we run gradient flow -- gradient descent with infinitesimal learning rate -- and we want to understand the implicit bias of the algorithm in this case. And the theorem was that gradient flow converges to the direction of the max margin solution, in the sense that the normalized margin of wt converges to the max margin as t goes to infinity. So here, wt is the iterate of gradient flow at time t.
And gamma is the normalized margin, and gamma bar is the max normalized margin. At the end of that lecture, I discussed the intuition. The main intuition is that you can do an approximation, and in a certain regime the cross-entropy loss is an approximation of the max margin. Let me very briefly summarize this. The main intuition is that if you do a bunch of heuristic calculations, you find that the log of the loss is approximately equal to minus the norm of w times the margin: log l hat(w) is approximately minus ||w|| times gamma(w). So basically, minimizing the loss means you either want to make the norm of w bigger, or you want to make the margin bigger. We did this very heuristic simplification to get this. So if you want to minimize the loss, in some sense you are either trying to make the norm bigger, or you are trying to make the margin bigger. And it turns out that you can actually control both of these two forces, these two tendencies, and it's actually true that both happen: the norm grows to infinity, and the margin converges to the largest possible margin. That's the thing we're going to prove in this theorem. Any questions so far? And one of the key techniques we discussed at that point was that log-sum-exp is kind of the same as max when its inputs have a large scale. So today, I'm going to provide a formal proof of this theorem, which in my opinion is very elegant and simple. And we only prove it for the case when the loss function is l(t) = exp(-t), the exponential loss.
Recall that in that lecture, we also discussed that the logistic loss, even though it's called logistic loss, is actually very close to the exponential loss. So we only deal with the exponential loss, which is almost the same as the logistic loss. The main feature is that as t goes to infinity, the loss goes to 0: t is supposed to be the margin, and when the margin is very big, your loss is very small. And the idea is that we consider the smooth margin. The smooth margin is defined to be minus the log of the empirical loss, divided by the two-norm of w: gamma tilde(w) = -log l hat(w) / ||w||. So recall that we established the approximate equation above during the intuition, and that actually motivates the use of the smooth margin. You can see that the smooth margin is basically supposed to be approximately equal to the margin gamma(w), if that approximation is true. But it's not exactly equal, just because the approximation is only approximate. So that's why we work with this smoother version, which is in some sense almost the same as the margin, but closer to the loss function l hat. And if you work with the smooth margin, you can show that the margin is actually bigger than the smooth margin. Let's write out exactly what this is: gamma tilde(w) = -log( sum over i of exp(-yi w transposed xi) ) / ||w||, without the 1/n factor. And you can show that the margin is larger than the smooth margin. It's because we can lower bound the sum by the term corresponding to the minimum margin: this is just because yi w transposed xi is at least gamma(w) times the norm of w.
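This inequality, and the fact that the gap closes as the norm of w grows, can be checked on synthetic separable data (the data below is my own illustration, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 3
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(n, d))
y = np.sign(X @ w_true)               # labels are linearly separable by w_true

def margin(w):
    # normalized margin: min_i y_i w^T x_i / ||w||
    return np.min(y * (X @ w)) / np.linalg.norm(w)

def smooth_margin(w):
    # gamma_tilde(w) = -log(sum_i exp(-y_i w^T x_i)) / ||w||, computed stably
    a = -y * (X @ w)
    amax = a.max()
    return -(amax + np.log(np.sum(np.exp(a - amax)))) / np.linalg.norm(w)

gaps = []
for scale in [1.0, 5.0, 25.0, 125.0]:
    w = scale * w_true
    # the smooth margin is always a lower bound on the margin
    assert smooth_margin(w) <= margin(w) + 1e-12
    gaps.append(margin(w) - smooth_margin(w))

# scaling w up leaves the margin unchanged, but the smooth margin approaches it
assert gaps == sorted(gaps, reverse=True)
```

This is exactly the large-scale log-sum-exp effect mentioned above: as ||w|| grows, the sum is dominated by its largest term and the two notions of margin coincide.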
So the smooth margin is supposed to be something close to the margin, but smaller. That's why it suffices to show that the smooth margin gamma tilde(wt) converges to gamma bar. This is because you have the sandwich: you know that gamma(w) is always at most gamma bar, so the margin is sandwiched between the smooth margin and gamma bar. If the smooth margin converges to gamma bar, then gamma(wt) has to converge to gamma bar, because there is no way for gamma(w) to go beyond gamma bar. So basically this is what we're going to do. We're going to prove that even the smaller value, the smooth margin, converges to gamma bar; then the larger value also converges to gamma bar. And the proof is actually pretty simple. We basically show that gradient flow will increase this quantity, the minus log loss -- intuitively, because it decreases l hat(wt). So let's do this formally, concretely. The statement itself, that it increases the minus log loss, is almost obvious, because the loss itself is going to decrease. But how much it increases requires some mathematical derivation. So concretely, recall that the change in w is w dot t = minus gradient l hat(wt). This is the definition of gradient flow. Then the derivative with respect to t of the minus log loss is computed by the chain rule: you first look at how the log loss depends on w, and then at how w changes. The chain rule for the log gives a factor of one over l hat(wt), and then you get the gradient of l hat(wt) inner product with w dot t. And recall w dot t is really minus the gradient of the loss function, so everything matches up to a sign.
So basically, you get the two-norm squared of the gradient of the loss function over l hat(wt): d/dt of minus log l hat(wt) equals ||gradient l hat(wt)||^2 / l hat(wt), and this is bigger than 0. So this shows that the minus log loss is going to increase as t goes to infinity. But the important thing is how fast it increases, which is this quantity. This is something we're going to use. That the whole thing is increasing is not surprising, because the loss is decreasing. But we also want to know how fast it is increasing. And by the way, I think it's useful to note that you can also write this as ||w dot t||^2 / l hat(wt), just because the gradient of l hat is equal to minus w dot t. So now with this, we can control what happens to the log loss after time t. What you get is: minus log l hat(wt) is equal to minus log l hat(w0) plus the integral from 0 to t of the derivative of this quantity. Using the equation above, that is minus log l hat(w0) plus the integral of ||w dot s||^2 / l hat(ws) ds. OK? So we basically now know how large the minus log loss is. Recall that what we care about is the smooth margin, and how it goes to gamma bar as t goes to infinity. We have dealt with the numerator -- we somewhat know how the numerator changes -- and the next thing is that we have to understand the denominator: we have to normalize by the norm of w. So next we're going to work with this integral term and compare it with the normalizer, the norm of w. So what you do is look at ||w dot t||. This is bigger than the inner product of w dot t with w star, where w star is the direction of the max margin solution, a unit vector. This is just by Cauchy-Schwarz, right?
So the inner product of two vectors is less than the norm of one vector times the norm of the other vector, and the norm of w star is assumed to be 1, so ||w dot t|| is at least the inner product of w dot t with w star. Then we plug in the definition w dot t = minus gradient l hat(wt), and then we plug in the actual formula for the gradient of l hat. What you get -- the minus sign here cancels with the minus in the gradient -- is the sum over i of yi times exponential of minus yi w transposed xi, times w star transposed xi. Here the exponential is a scalar and xi is a vector, so you just pair up the scalar parts and the vector parts. And we can see that yi times w star transposed xi is at least the max margin, because w star is the max margin solution: this is larger than gamma bar, since gamma bar is the margin of w star, and every data point has margin at least the margin of the data set. Gamma bar is essentially the minimum over all data points, right? So the whole sum is at least gamma bar times the sum of the exponentials, which is gamma bar times the loss l hat(wt). So with this, we can proceed to further lower bound how fast minus log l hat(wt) grows. But maybe one more remark before I use this; let me try to interpret what this is really doing. In some sense, what we showed is that wt is correlated with w star: we are showing that the inner product of w dot t with w star is lower bounded by a non-negative quantity.
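Both inequalities in this chain -- Cauchy-Schwarz, then the per-example margin bound -- can be checked numerically. The dataset and the direction w star below are my own toy construction; the bound holds for any unit direction, with gamma bar replaced by that direction's margin:

```python
import numpy as np

# Toy separable dataset (my own construction, not from the lecture).
X = np.array([[1.0, 0.0], [-1.0, 0.0], [2.0, 3.0]])
y = np.array([1.0, -1.0, 1.0])
w_star = np.array([1.0, 0.0])            # a unit direction separating the data
gamma_bar = np.min(y * (X @ w_star))     # its margin (equals 1 here)

def loss(w):
    # exponential loss: sum_i exp(-y_i w^T x_i)
    return np.sum(np.exp(-y * (X @ w)))

def grad(w):
    return -(y * np.exp(-y * (X @ w))) @ X

rng = np.random.default_rng(0)
for _ in range(100):
    w = rng.normal(size=2) * 2
    g = grad(w)
    tol = 1e-9 * (1.0 + loss(w))         # relative slack for floating point
    # Cauchy-Schwarz: ||grad|| >= <-grad, w*>, since ||w*|| = 1
    assert np.linalg.norm(g) + tol >= -g @ w_star
    # margin bound on every data point: <-grad, w*> >= gamma_bar * loss
    assert -g @ w_star + tol >= gamma_bar * loss(w)
```

Chaining the two asserts gives exactly equation (1) of the proof: the speed ||w dot t|| is at least gamma bar times the current loss.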
And how correlated it is depends on gamma bar and the loss. Because w dot t is correlated with w star, it means that w dot t itself cannot be too small. So this is the other thing we got: ||w dot t|| is not too small, at least compared to the loss. Call this equation (1): ||w dot t|| is at least gamma bar times l hat(wt). So what this is saying is that if the loss is not small, then you have to make some change in your w, and if you have to make some change in your w, then the minus log of the loss needs to increase. It's a little counterintuitive in some sense, but that's the idea. So what we do next is control that additional integral term. There is a power of 2 in ||w dot t||^2, so we apply equation (1) to one of the two occurrences of ||w dot t||: this gives ||w dot t||^2 at least gamma bar times l hat(wt) times ||w dot t||. The l hat(wt) cancels with the denominator, and gamma bar comes out in front, so we get that the integral is at least gamma bar times the integral of ||w dot s|| ds. And then you can use the triangle inequality to say that the integral of the norm is larger than the norm of the integral: the integral of ||w dot s|| ds is at least ||wt - w0||, which is at least ||wt|| minus ||w0||. So you replace the integral with the norm, and you get gamma bar times the norm of wt, minus a constant. So I guess next, you're going to see why we care about all of this. We care about this because now you can compare how fast the minus log loss is growing with how fast the norm of w is growing. And this is what we really care about, because fundamentally, we care about the ratio between them.
This ratio is the definition of the smooth margin. So putting it together: minus log l hat(wt) is at least minus log l hat(w0) plus gamma bar times (||wt|| minus ||w0||). Dividing by ||wt||, the smooth margin is at least gamma bar plus a term of the form constant over ||wt||. That term becomes closer to 0 as t goes to infinity, because ||wt|| goes to infinity as t goes to infinity. So if you take the limit as t goes to infinity, this ratio -- recall this ratio is the smooth margin -- converges to gamma bar. In other words, the limit as t goes to infinity of gamma tilde(wt) is equal to gamma bar. Maybe from this you only get one direction, the lower bound, and then you use the other direction: we also know that gamma bar is larger than the margin of wt, because gamma bar is the max margin, and the margin of wt is in turn larger than the smooth margin. So the limit is actually equal to gamma bar exactly. So we're good. Any questions? OK. So with this, we basically concluded our section about implicit regularization. Just to very quickly wrap up: this is the end of the section about implicit regularization, and we have talked about a bunch of things. Initialization: a small initialization prefers a certain kind of solution, typically a small norm solution. In one of the cases, we also showed that you can interpolate between small initialization and large initialization, so you can show the implicit bias for any initialization. We also talked about the classification problem, where you get the max margin solution. And we also talked a lot about the noise. So in all these cases, you have something in your optimizer that is only designed for optimizing faster in some sense, but somehow, as a side effect, you get an implicit regularization effect. OK. So any questions? OK.
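The whole theorem can be observed in a small simulation: plain gradient descent with a small step size (mimicking gradient flow) on the exponential loss, over a toy separable dataset. For this particular dataset the max-margin direction is w star = (1, 0) with gamma bar = 1; the dataset and the numeric thresholds below are my own construction, not from the lecture:

```python
import numpy as np

# Toy separable dataset: max-margin direction is (1, 0), gamma_bar = 1
X = np.array([[1.0, 0.0], [-1.0, 0.0], [2.0, 3.0]])
y = np.array([1.0, -1.0, 1.0])

def exp_loss_grad(w):
    m = y * (X @ w)                    # per-example (unnormalized) margins
    return -(y * np.exp(-m)) @ X       # gradient of sum_i exp(-y_i w^T x_i)

def normalized_margin(w):
    return np.min(y * (X @ w)) / np.linalg.norm(w)

# gradient descent with a small step size, mimicking gradient flow
w = np.zeros(2)
eta = 0.1
margins = []
for t in range(1, 200001):
    w -= eta * exp_loss_grad(w)
    if t % 50000 == 0:
        margins.append(normalized_margin(w))

# the normalized margin creeps up toward gamma_bar = 1,
# slowly, since ||w_t|| only grows like log t
assert margins == sorted(margins)
assert margins[-1] > 0.99
```

The slowness is exactly what the proof predicts: the gap to gamma bar is of order constant over ||wt||, and the norm grows only logarithmically in t.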
So if there are no questions, let me move on to the final part of this course, which is about unsupervised learning, representation learning, and so on and so forth. So basically, in the next two and a half lectures, we're going to talk about unsupervised learning. There is not that much theoretical work about unsupervised learning. Of course, there are a lot of very amazing empirical works these days, but not that much theoretical work. So what I'm going to do is start with a somewhat classical approach. For this lecture and a good portion of the next lecture, I'm going to talk about a classical theoretical approach. There were many, many approaches before; for example, before deep learning, the best empirical approach would probably be latent variable models trained with EM, expectation-maximization. But for those EM algorithms, there is very little theoretical analysis, and even the existing analyses are for special cases, and it's not clear whether they can be extended to complex cases. So what I'm going to talk about is a different line of research, which uses the so-called moment method. These kinds of methods don't necessarily work very well empirically, but you can analyze them in a very clean way, and these kinds of mathematical techniques are also useful for many other cases. So I think it's worth spending one lecture to talk about this approach. And it used to be the case, around probably 2012, 2013, that the theoretical community thought this might be the new thing: something you can both analyze and that works empirically.
It turns out that the analysis part got developed very well, but the empirical part is doing OK, not good enough to replace the EM algorithms -- at least not enough to replace them completely. And then I'm going to talk about some of the more modern work with deep learning, like, for example, self-training or contrastive learning. These are basically analyses from the last one or two years of some of the new algorithms in deep learning. I'm going to spend the last one and a half lectures on this. OK, so that's the plan for the next two and a half lectures. And by the way, another general comment is that in my opinion, unsupervised learning seems to be the core of many things. It also relates to, for example, semi-supervised learning, where you have some unlabeled data together with labeled data, and it also relates to unsupervised domain adaptation. My personal opinion is that in both of these questions, what you really care about is how you leverage unlabeled data. So in some sense, they all reduce to unsupervised learning, in my opinion. So now let's get into something more concrete. Let's have some setup. This is the setup for latent variable models. We are interested in these latent variable models, especially in the classical approach. So the formulation is that you have a distribution p_theta, parameterized by theta. How it's parameterized by theta -- there are many different ways, and I'm going to introduce a few of them. But each parameter theta determines a distribution p_theta. And then you are given unlabeled examples; there are no labels anywhere. So you're given examples x1 up to xn, sampled i.i.d. from this distribution p_theta.
And your goal is to recover, or learn, theta from the data. So that's the formulation. And p_theta can be, or typically is, described by a latent variable model -- basically a generative model, in some sense. I assume you roughly know what a latent variable model is from CS229, but let me give some examples. For example, mixture of Gaussians. This is probably one of the most studied latent variable models in machine learning. In the most general form, the parameter theta describes a bunch of things. Let me write it down first. You have k vectors, and a bunch of probability numbers. Each of these mu_i in dimension d is the mean of a component, and p1 up to pk is a probability vector in the simplex. Let's call it Delta_k, the simplex in k dimensions, which is the set of non-negative vectors in dimension k whose entries sum to 1. So p1 up to pk is a probability vector over k items. And given these parameters, what's the model? How do you generate data? It's a mixture of Gaussians. Intuitively, you just want to model the case where you have several clusters of data, something like this. I guess you don't see the colors in the data; you just see the raw inputs. The color is just to indicate which Gaussian each point comes from. So mathematically, you sample x from p_theta by first sampling some i, the cluster id, from the categorical distribution defined by p. So i can take values from 1 to k. And then given the cluster id, you sample x from a Gaussian with mean mu_i and some covariance, let's say, identity. So actually, the covariance can also be a parameter you want to learn.
But here, for simplicity, I just assume all the Gaussians have the same covariance, just to make everything easier. So this is the latent variable model, where i is the latent variable. This is something you don't observe in the data; you only observe x. But given the latent variable, you can generate the data. So basically, there are two parts: you first generate the latent variable and then generate the data from it. Other examples, which I'm going to define mostly when I use them: HMM, the hidden Markov model -- if you took an NLP class, probably you have seen these kinds of things -- or ICA, independent component analysis, which is also something covered in CS229. And there are many, many other latent variable models, Bayes nets and so forth. So this is the kind of question we're going to study. And now let's talk about the approach. Maybe before that, any questions? OK. So the approach we're going to study is the so-called moment method, which is actually pretty powerful. As an approach, it has some drawbacks, which make it empirically less appealing, but setting those aside, the approach itself is actually pretty powerful. This is called the moment method. I think this method was proposed by a few economists to understand economic data -- so the original source is definitely not machine learning -- but people use it for machine learning these days, in a pretty sophisticated way. Actually, I think I misspoke: the very original proposal of the moment method probably dates back to the 19th century, to some statisticians. And then some economists even got the Nobel Prize by generalizing these moment methods to something like what we are discussing right now.
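The two-step generative process just described -- sample a cluster id from p, then sample from that cluster's Gaussian -- can be sketched in a few lines. The means and mixing weights below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(mus, p, n):
    """Sample n points from a mixture of Gaussians with identity covariance.

    mus: (k, d) array of component means; p: (k,) mixing probabilities.
    """
    k, d = mus.shape
    z = rng.choice(k, size=n, p=p)            # latent cluster ids (unobserved)
    x = mus[z] + rng.normal(size=(n, d))      # x | z ~ N(mu_z, I)
    return x, z

mus = np.array([[5.0, 0.0], [-5.0, 0.0], [0.0, 5.0]])
p = np.array([0.5, 0.3, 0.2])
X, z = sample_mixture(mus, p, n=10000)

# sanity checks: empirical cluster frequencies and per-cluster means
freq = np.bincount(z, minlength=3) / len(z)
assert np.allclose(freq, p, atol=0.03)
assert np.allclose(X[z == 0].mean(axis=0), mus[0], atol=0.1)
```

A learner only sees X; the array z is exactly the latent variable that is hidden in the unsupervised setting.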
Anyway, let's see how this works. I'm going to walk you through this kind of method by showing examples. So let's do the first example: a mixture of two Gaussians. You just have two Gaussians, so k is 2. And let's also assume p1 and p2 are just one half, so the two components have the same probability, the same mixing weights. And also, without loss of generality, we can assume the average of the two means is 0, so they are just symmetric around the origin. This is, in some sense, [INAUDIBLE] because which point you choose as the origin wouldn't really matter that much. So then you can write: let mu be equal to mu_1, and then mu_2 is equal to minus mu. So basically, we only want to learn one parameter vector, which is mu, and the data comes from this mixture of two Gaussians. One Gaussian has mean mu and covariance identity; the other Gaussian has mean minus mu and covariance identity. And the general approach for the moment method is the following. First, you estimate moments of x using empirical samples; I'm going to define what exactly a moment means. And then you recover the parameters from the moments of x. And by moment, we really mean something like this: the first moment means the average of x over the data. So let's try to do this for this particular example. The first moment is the expectation of x. And what is the expectation of x? There are two cases: one case is that the latent variable is 1, and the other case is that the latent variable is 2. So you can look at the expectation of x for both of the two Gaussians, right?
With half the chance you come from the first Gaussian, which is the case when i is 1, and with half the chance you come from the second Gaussian. When you come from the first Gaussian the mean is mu, so by definition you get a half times mu; when you come from the second Gaussian the mean is minus mu, so you get a half times minus mu, and the total is 0. So there is no information about mu in the first moment. Not so good: our plan is to recover mu from the moments, but from the first moment we cannot get anything. So you go to the second moment. Let's call the first moment M1 and the second moment M2. M2 is defined to be the expectation of the outer product of x with itself, the expectation of x x transposed. Why is this called the second moment? It's a matrix, and the ij entry of M2 is the expectation of x_i x_j, the expectation of the product of two coordinates of the data; you organize all of these into a matrix and call it M2. And if you compute the second moment, you can actually kind of see mu in it. How do I compute the second moment? Again the same thing: with half the chance x comes from the first Gaussian, with half the chance from the second. So what's the second moment of x under the first Gaussian? This requires a little bit of calculation, so let's do that here. Suppose z comes from a Gaussian with mean mu and identity covariance; let's use a different letter so we don't confuse it with x. What is the second moment? There are several ways to compute it. One way is to literally look at each pair of coordinates and compute expectations.
That's perfectly fine, but here I'm going to be a little lazy. I'm going to write that this is equal to the expectation of z times the expectation of z transposed, plus the covariance of z, because the covariance of z equals the second moment minus the outer product of the mean with itself. The mean is mu, so you get mu mu transposed, and the covariance is the identity; that's where mu mu transposed plus identity comes from. So for the first Gaussian you get a half times mu mu transposed plus identity; and for the second Gaussian the moment is exactly the same, because mu and minus mu give the same outer product. So eventually you get mu mu transposed plus identity. OK? Now it looks good, because mu can, in some sense, be read off from the moment: if you have the second moment, you subtract the identity and you can recover mu. But you don't know M2 exactly; you estimate it from the empirical samples. So you define the empirical second moment, M2 hat, and then you recover mu from M2 hat by pretending M2 hat is the same as M2. For example, you can subtract the identity from M2 hat and then try to take a square root. So how do we recover mu? Let's do a warm-up. To recover mu from M2 hat, the first thing you want to make sure is that you can recover it from M2 itself; that's the prerequisite. And we have argued that this is true: you can subtract the identity from M2 and take the square root. There's another way to do it, the spectral method.
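The two moment computations above can be checked empirically. This sketch uses one particular mu and a large sample (both arbitrary choices) to confirm that the empirical first moment carries no information while the empirical second moment approaches mu mu transposed plus identity:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 3, 200_000
mu = np.array([2.0, -1.0, 0.5])

# symmetric mixture: half the points from N(mu, I), half from N(-mu, I)
s = rng.choice([1.0, -1.0], size=n)
x = s[:, None] * mu + rng.standard_normal((n, d))

M1_hat = x.mean(axis=0)      # empirical first moment, close to 0
M2_hat = x.T @ x / n         # empirical second moment, close to mu mu^T + I
```

With 200,000 samples the entries of `M2_hat` agree with the population value to roughly two decimal places, which is the "pretend M2 hat is M2" step in action.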
I'm going to introduce this here because it will be useful later. How do we recover mu from mu mu transposed plus identity? You take the top eigenvector of M2, which is actually equal to mu over the norm of mu; let's call this mu bar. So the top eigenvector of M2 is exactly in the direction of mu bar, and the top eigenvalue is the 2-norm of mu squared, plus one. This is something you can verify relatively easily: the top eigenvector of mu mu transposed is mu bar, and the top eigenvector of mu mu transposed plus identity is the same, just because adding the identity to any matrix doesn't change the eigenvectors; it only shifts every eigenvalue up by one. So from M2 you can recover mu, either by the simple subtraction and square root, or by this eigendecomposition. And this corresponds to the infinite-data case, because with infinite data you can literally compute M2: the empirical average is exactly equal to the population average. So now the question becomes: what if you don't have infinite data? You don't have M2, only M2 hat. Basically, you run the same algorithm, the same eigendecomposition, on M2 hat, and you need the algorithm to be robust to errors, in the sense that if the two matrices M2 and M2 hat are similar, then applying the algorithm to them gives similar answers. If that's the case, you get a similar answer as if you had computed on M2, so you get an approximate estimate of mu. And it turns out this robustness is often fine, at least in a qualitative sense.
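The spectral route can be sketched directly. Here I use the population M2 (the infinite-data case the lecture emphasizes); note that mu is only identifiable up to sign, since mu and minus mu define the same mixture, so the sign is fixed by hand for the check:

```python
import numpy as np

mu = np.array([2.0, -1.0, 0.5])
M2 = np.outer(mu, mu) + np.eye(3)        # population second moment

# top eigenvector points along mu; top eigenvalue is ||mu||^2 + 1
w, V = np.linalg.eigh(M2)                # eigenvalues in ascending order
top_val, top_vec = w[-1], V[:, -1]
mu_hat = np.sqrt(top_val - 1.0) * top_vec    # recover mu up to sign
if mu_hat @ mu < 0:                          # resolve the inherent +/- ambiguity
    mu_hat = -mu_hat
```

The same two lines of eigendecomposition applied to an empirical `M2_hat` give the finite-sample algorithm, relying on the robustness-to-errors point discussed next.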
Most of the algorithms we're going to discuss are robust to some errors, so we're going to focus mostly on the infinite-data case. The error-analysis part matters if you actually want to publish a paper, but for the core ideas you don't have to do it, because most of these algorithms are reasonably robust. Any questions so far? So we've basically completed our discussion of the mixture of two Gaussians. Now let's deal with a mixture of more Gaussians, and the point will be that you cannot use only the first and second moments; you actually have to go to the third moment, which makes things a little more complicated. So the general approach is to compute M1, which is the expectation of x; M2, which is the expectation of x x transposed; and M3. What is M3, the third moment? M3 is the expectation of x tensor x tensor x. If you're not familiar with this notation: x tensor x tensor x is a third-order tensor of dimension d by d by d. Say we call it T; then the ijk entry of T is x_i times x_j times x_k. In some sense, x tensor x is just a rewriting of x x transposed, and x tensor x tensor x is defined analogously. You can also take a tensor b tensor c for different vectors: if T prime equals a tensor b tensor c, then the ijk entry of T prime is a_i times b_j times c_k. So the ijk entry of M3 is the expectation of x_i times x_j times x_k: every entry of this third-order tensor is the expectation of the product of three coordinates of the data. And you can do the same for M4, M5, and so on.
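These tensor definitions map directly onto `np.einsum` (my choice of tool; any way of forming the outer products works). The sketch below builds the empirical M1, M2, M3 from samples and a rank-one tensor a tensor b tensor c:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal((5000, 3))       # n samples in d = 3

# empirical moments: M1 has shape (d,), M2 (d, d), M3 (d, d, d)
M1 = x.mean(axis=0)
M2 = np.einsum('ni,nj->ij', x, x) / len(x)
M3 = np.einsum('ni,nj,nk->ijk', x, x, x) / len(x)   # (M3)_ijk = E[x_i x_j x_k]

# a single outer product a tensor b tensor c
a, b, c = np.arange(3.0), np.ones(3), np.array([1.0, 2.0, 3.0])
T = np.einsum('i,j,k->ijk', a, b, c)     # T_ijk = a_i * b_j * c_k
```

By construction M3 is symmetric under any permutation of its three indices, which is worth keeping in mind when designing recovery algorithms on it.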
Then you design an algorithm, call it script A, that takes in the moments and outputs theta; you want to recover the parameter theta from the moments. If you can do this, the last step is to show that A is robust to errors, and then apply A to the empirical moments. Applying A to the empirical moments is the final algorithm; all the previous steps are just the process of designing it. So what order of moments do you have to use? Do you need the third-order moment, the fourth-order moment? That depends on how many moments are needed to recover the parameter theta. If you can recover it from the first and second moments, then two moments are fine; if not, you need M3, and otherwise you may even need M4. In fact, in some cases we do need M4; I think even in a case we're going to discuss, M4 is needed. Any questions? OK. I have about 15 minutes left, so let's talk about a mixture of k Gaussians, and I'm going to show you that you actually need at least the third moment when the number of components is more than two. This is very typical: in most cases you need at least the third moment. Actually, it's not easy to find a case where the second moment suffices; I had to think a bit before finding this two-component mixture of Gaussians. In almost all other cases you need the third moment. So let's again make it simpler: assume a mixture of Gaussians with a uniform mixture, so every component shows up with probability 1 over k.
So you sample i uniformly from 1 to k, and then you generate x from a Gaussian with mean mu_i and identity covariance. That's the generative model for our data; alternatively, you can write that x is sampled from the average of these k distributions. In everything that follows, we're only going to do steps (a) and (b), estimating moments and recovering parameters from them, for all examples, including the examples in the next lecture. The robustness step you can do, but it requires too much mathematical machinery that isn't really needed for this course, and (a) and (b) are really the gist, the core thing that makes this work. So now let's compute the moments and see which moment is enough for recovery. Again, the first moment: there are k possible cases, each cluster shows up with probability 1 over k, and conditioned on cluster i the mean is mu_i, so the first moment is 1 over k times the sum of the mu_i's. Clearly, from the first moment you only know the average of the means; you probably can't recover each individual mean. That sounds reasonable. Now the second moment. We again use the law of total expectation: condition on the latent variable i and take the second moment of that Gaussian, which we've shown is mu_i mu_i transposed plus identity. So the second moment is 1 over k times the sum over i from 1 to k of mu_i mu_i transposed, plus identity; basically the average of the outer products, plus identity. So the question becomes: suppose you only want to use the first and second moments. Can we recover the means from M1 and M2, or more specifically from the average of the mu_i's and the average of mu_i mu_i transposed?
And the claim is that this is not possible, at least when k is at least 3. The argument is the following: these two quantities just don't contain enough information for you to recover the means; you're still missing some rotation information. What does that really mean? Let me make it precise. To make the discussion easier, define U to be the collection of means, mu_1 up to mu_k as columns, a matrix of dimension d by k. This is the matrix you want to recover. I'm claiming there exist two different sets of means that have exactly the same two quantities: both M1 and M2 are the same even though the means are different. I'm going to construct such a situation. Take a rotation matrix R of dimension k by k, and consider U versus U times R; rotating on the right-hand side gives you a different set of means. I claim that U and U R have the same statistics, the same two quantities. First, if you look at the average of the outer products, 1 over k times the sum of mu_i mu_i transposed, that's 1 over k times U U transposed in our simplified notation, and this equals 1 over k times U R times (U R) transposed, just because R R transposed equals the identity; that's the defining property of a rotation. So U and U R are not distinguishable from this quantity. Now the first moment. To make the first moment also indistinguishable, I additionally require that R times the all-ones vector equals the all-ones vector: you want a rotation, but one that doesn't rotate the direction of the all-ones vector. That's easy.
You have many rotations to choose from. It's like a globe: there's one direction you don't change, but you can still rotate in the other directions. The subspace orthogonal to the all-ones direction has dimension k minus 1, so you still have plenty of degrees of freedom to choose many different R's satisfying this, as long as k is at least 3. Suppose R satisfies this. Then the first moment, 1 over k times the sum of the mu_i's, is 1 over k times U times the all-ones vector, and I claim this equals 1 over k times U times R times the all-ones vector, just because I designed R that way. So from this quantity, too, you cannot distinguish U from U R. So U and U R are not distinguishable: they exactly match the first and second moments. That's why we need to go to M3 to uniquely identify the columns of U. OK, we're five minutes early, but the next thing would probably take much more than five minutes, so I'll stop here and see whether there are any questions. Next lecture we will continue and solve this question with M3. Any questions? Is there a [INAUDIBLE]? Yeah, the question is: how do you infer the number of Gaussians? First of all, you're right that in the current formulation I'm assuming I know the number of Gaussians exactly; I'm even assuming I know the probabilities of each Gaussian, that p_1 up to p_k are all exactly 1 over k. So how do you infer the number of Gaussians, and maybe also how do you infer p_1 through p_k? There are ways, various ways depending on what assumptions you make, but it's definitely possible.
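The indistinguishability construction can be verified numerically. The sketch below builds an orthogonal R that fixes the all-ones vector (via a QR-based change of basis, my own choice of construction; the argument only needs R R transposed to equal the identity) and checks that U and U R match on both quantities:

```python
import numpy as np

rng = np.random.default_rng(3)
d, k = 5, 4
U = rng.standard_normal((d, k))          # columns are the means mu_1..mu_k

# orthogonal R with R @ ones = ones: act as identity on the all-ones
# direction and as an arbitrary orthogonal map on its complement
ones = np.ones(k)
A = np.column_stack([ones, rng.standard_normal((k, k - 1))])
Q, _ = np.linalg.qr(A)                   # Q[:, 0] is +/- ones / sqrt(k)
D = np.eye(k)
D[1:, 1:] = np.linalg.qr(rng.standard_normal((k - 1, k - 1)))[0]
R = Q @ D @ Q.T

U2 = U @ R                               # a genuinely different set of means
M1a, M1b = U.sum(axis=1) / k, U2.sum(axis=1) / k   # same first moment
M2a, M2b = U @ U.T / k, U2 @ U2.T / k              # same second moment term
```

So the two parameter settings produce identical first and second moments even though the means differ, which is exactly why M3 is needed.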
For example, one way that works in certain cases is to infer the number of Gaussians from the rank of this matrix. Suppose you believe the mu_i's are not degenerate, that they're all in general position; then the rank of this matrix will be k, at least when k is less than d. So you can infer the number of Gaussians, k, by looking at the rank of this matrix. I'm not saying this is actually a great method, because empirically you run into other issues when your assumptions aren't exactly satisfied, and so forth. There are many other ways, and empirically the most typical way to estimate the number of Gaussians is nonparametric methods, which is not something we will cover here. For the theoretical setup, we're mostly interested in a clean setting where you know everything, and it's still a nontrivial question to recover the mu_i's even with knowledge of the number of Gaussians. [INAUDIBLE] Right, so as long as that happens, it wouldn't work; that's why it's probably not a great idea. But typically, if you have genuinely high-dimensional data, the mu_i's are linearly independent; still, one of them could lie approximately in the span of the others, and then it becomes tricky whether you're robust to errors, and so on. Yes. So loosely speaking it's reasonable, but if you really look at the details it's not that great; that's why you sometimes need other methods. If there are no other questions, I'll see you next Monday.
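The rank-based idea can be sketched as follows, assuming means in general position and k less than d (the dimensions and threshold below are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
d, k = 10, 3
mus = rng.standard_normal((k, d)) * 3.0   # k means in general position

# population second moment of the uniform mixture: (1/k) sum mu_i mu_i^T + I
M2 = mus.T @ mus / k + np.eye(d)

# number of eigenvalues of M2 - I above a small threshold = rank = k
eigs = np.linalg.eigvalsh(M2 - np.eye(d))
k_hat = int((eigs > 1e-8).sum())
```

With empirical moments the eigenvalues are only approximately zero, so the threshold choice becomes delicate; that is the robustness issue the lecturer alludes to.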
Stanford CS229M Machine Learning Theory, Fall 2021. Lecture 13: Neural Tangent Kernel.

OK, guys, let's get started. Last week I spent some time reading the feedback from the survey; I've been going through all of it. I'm not going to discuss every point here, but all the points are well taken, and thanks for the very helpful feedback. Some of it I'm going to act on. There are also some conflicting requests, which is very understandable, because different people have different preferences. That's completely fine. I'm just saying I can't address every possible request, because there are some constraints; though of course, sometimes even conflicting requests can be addressed if you're creative, and I will try to do that as well. There's one thing I want to discuss a little, which I think might be useful for you, and I'm not trying to find excuses for the lectures: some people mentioned that it's a little hard to follow the notes during lecture. I can completely understand that. I wrote pretty fast, and I'm going to slow down a little, at least to make the layout and format a bit cleaner and easier to read. But in my opinion, and of course I'm not saying you have to take courses my way, I typically don't take a lot of notes. I tried to design this course so that you don't have to take all the notes yourself, because we're going to have scribe notes later, and some of them are already there. When I listen to a theoretical lecture, I try to think more so that I can keep things in my head a little, because, at least for me, it takes too much energy to write everything down. I'm not sure this is useful for everyone.
It may not work for everyone, but maybe you can try it a little, just to see whether it's easier if you take fewer notes and try to remember a bit more. Otherwise, I'm going to slow down, at least in terms of the writing, and probably also in the overall pace, given the feedback that some of the lectures were a bit too fast. Another thing is the homework questions. Indeed, I think I made the mistake of making a few subquestions a bit too difficult. They were bonus questions in past offerings, and this quarter, since you work in teams of three, I thought I could make them regular points, but they're probably still a little too difficult, and they required some fixes, as you probably noticed. I checked the latest homework, and I think there's nothing like that; most of the questions shouldn't require any super-special tricks about common topics. Also, if you want some bonus points, there are other ways to get them, for example doing scribe notes or improving existing ones. If you don't care about an A-plus, a bonus point is effectively worth the same as a regular point under the grading policy: we first decide the cutoffs before bonus points, and then the bonus points can only give you a better letter grade. Anyway, there is other very important, very nice feedback that I'm going to incorporate in the lectures as well; I won't discuss all of it, just to save some time.
OK, so let's get into the technical part if there are no other questions or discussions. Last Wednesday I was sick, and we asked you to watch the video online. Roughly speaking, in that video we talked about nonconvex optimization, and the main point was that if you have the property that all local minima are global, then you can find a global minimum. Of course there are technical conditions, like the so-called strict saddle property, which we discussed in the video, and some other subtleties, but this is the main point: you basically only have to show that this property holds, and then you can find a global minimum of the nonconvex function. From a broader point of view, this kind of approach has been quite successful. In some sense, what I'm going to discuss next is another example of it, but with some special subtleties. What we showed last time was a global statement: "all local minima are global" was true over the entire parameter space. Today, we're only going to look at a special part of the space. The function we're going to discuss today looks something like this: there's a complicated part of the landscape that you don't know how to characterize, but you identify a small region where the property is true, a special region where all local minima are global, and there's actually a good global minimum in it, so you just work in that region. That's the connection to the previous lecture. There are other issues with this kind of approach, which we touched on in one of the overview lectures: the limitation is that you've identified this region where everything is nice, where the landscape is just so nice.
But is this the region you really care about? If you really care about finding a global minimum of the training loss, then yes, this has to be the region, because you do find a global minimum of the training loss there. But if you care about other properties, like generalization performance, then it might not be the right region to focus on. For today's lecture we won't worry about that: we'll just go through how this works and then talk about the limitations, and in future lectures we'll talk about ways to improve upon, or fix, the issues with this kind of approach. OK, that's a very rough high-level overview. Also, by the way, if you haven't seen my notes or announcement on Ed: there are actually two videos we asked you to watch to make up for the last lecture. One of them is a full lecture, and the other is 15 minutes. They're about this "all local minima are global minima" phenomenon in nonconvex optimization, and this does relate to one of the homework questions. The question itself is, in some sense, self-contained, but I think it's useful for you to know the basic ideas, even the basic proof ideas, in those two videos, so that you can better see how to do the homework question. OK. So today, let's talk about this special-region idea, which is also often called the neural tangent kernel approach. For now, just think of the name as a placeholder; I'm going to explain why it's called the neural tangent kernel. The basic idea is that you look at a neighborhood of your initialization and do a Taylor expansion, and this works for any model. So suppose you have a nonlinear model f_theta(x); linear works too, but nonlinear is the most interesting case.
Then you do a Taylor expansion around the initialization; say it's theta 0. When you Taylor expand the model at the initialization, you expand with respect to the parameters, not the input: the input is fixed and the parameter is the variable, with theta 0 as the reference point. So you get f_theta0(x), plus the gradient with respect to theta evaluated at theta 0, times theta minus theta 0; that's the first-order Taylor expansion. And then there are higher-order terms, which we're going to ignore. Once you do this, you can call the truncated expansion g_theta(x). Of course it also depends on theta 0, but theta 0 is fixed and theta is the variable, so g_theta(x) is a function of theta, and it's a linear function of theta: theta only shows up in the first-order term, and it shows up linearly. Basically, you've linearized your model. You can also define delta theta as the difference between theta and theta 0. Technically, g is an affine function, because of the constant term; affine in theta or in delta theta, they're not too different. I just want to introduce the notation delta theta. Now, f_theta0(x), this reference value, is a constant from this perspective: for fixed x it doesn't change as you change theta. And in some sense it's just not that important, because it's a constant. Sometimes, for convenience, you choose theta 0 such that f_theta0(x) is equal to 0 for every x. How do you do that? For example, what you can do is design the network by splitting it into two parts.
So suppose you have a network with all of these connections. Then for some layer you split it into two halves, you make the two halves exactly the same, and you put a plus 1 on one half and a minus 1 on the other, so that they cancel. You still have a somewhat random initialization, but the initialized model computes the zero function. I'm not sure my drawing makes any sense; I see some confusion in your faces, but this is supposed to be something simple. For example, take a two-layer network: the sum over i from 1 to n of a_i times sigma of w_i transpose x. What you can do is add, for each neuron, a twin with output weight minus a_i and the same w_i: minus a_i times sigma of w_i transpose x. So you have 2n neurons, the w_i's are duplicated, and the a_i's are paired with their negations, which makes the function identically 0 while you still have reasonably good randomness: you can still choose the w_i's to be random. Anyway, this is not a super important point, and even if you don't do this, you can still somewhat get away with it, because f_theta0(x) is just a constant. So from now on, we're going to assume f_theta0(x) is 0 in most cases. And if you think about it, this is saying that if you take y prime to be y minus this constant (which we're going to assume is 0), then you're fitting a linear function: g_theta(x) equals grad_theta f_theta0(x) transposed times delta theta.
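The paired-neuron trick just described can be sketched in a few lines. ReLU as the activation and the specific dimensions are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
d, m = 4, 8                      # input dimension, neurons per half

relu = lambda z: np.maximum(z, 0.0)

# duplicate the hidden weights and negate the second half of the output
# weights, so the two halves cancel and f at initialization is identically 0
W_half = rng.standard_normal((m, d))
a_half = rng.standard_normal(m)
W = np.vstack([W_half, W_half])          # shape (2m, d), halves identical
a = np.concatenate([a_half, -a_half])    # shape (2m,), paired +/- weights

f = lambda x: a @ relu(W @ x)            # evaluates to 0 for every input
x = rng.standard_normal(d)
```

The hidden weights are still random, so the gradient features defined next are nontrivial even though the initial function value is zero.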
And this becomes a linear function of delta theta: you can think of delta theta as the parameter and grad_theta f_theta0(x) as the feature map, the same kind of feature map phi(x) we discussed, for example, in CS229 for kernel methods. And the feature map is something that doesn't depend on the parameter: theta 0 is fixed already, so grad_theta f_theta0(x) is really just a fixed function of x, given the architecture and theta 0; it doesn't depend on delta theta. So in some sense, it just becomes a kernel method. For simplicity, if you assume f_theta0(x) is 0, then y and y prime are the same: basically, you're fitting a linear function on these features to your targets, and this becomes a kernel method. You can define the kernel k(x, x') to be the inner product of the features, phi(x) transposed phi(x'), which is the inner product of the two gradients. And why is this called the neural tangent kernel? Because the feature is the tangent, the gradient, of the network; that's why it's called the neural tangent kernel. Anyway, neural tangent kernel is just the name. OK. So suppose we just use the model g_theta(x) instead of the original model; then you've basically got a kernel method, a linear model on top of the features. And for the loss function: suppose you believe theta stays close to theta 0. Then you can intuitively say that your original loss, which is a function of the model output f_theta(x) and y, is approximately equal to the new loss, which is a function of g_theta(x) and y. And this new loss is convex, because l is convex and a convex function composed with a linear function is still convex. But this is only when theta is very close to theta 0.
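For the two-layer ReLU network above, the gradient feature map and the resulting kernel can be written out analytically. This is a sketch under my own choice of architecture and scaling, not the lecture's exact setup:

```python
import numpy as np

rng = np.random.default_rng(6)
d, m = 4, 50
W0 = rng.standard_normal((m, d)) / np.sqrt(d)   # fixed initialization
a0 = rng.standard_normal(m) / np.sqrt(m)

def ntk_feature(x):
    """phi(x) = gradient of f_theta(x) = a . relu(W x) with respect to
    all parameters (W, a), evaluated at the fixed initialization."""
    pre = W0 @ x
    act = np.maximum(pre, 0.0)                   # df/da_i = relu(w_i . x)
    gate = (pre > 0).astype(float)               # subgradient of relu
    dW = (a0 * gate)[:, None] * x[None, :]       # df/dW entries
    return np.concatenate([dW.ravel(), act])     # length p = m*d + m

def ntk(x, xp):
    # neural tangent kernel: inner product of the two parameter gradients
    return ntk_feature(x) @ ntk_feature(xp)
```

Since the kernel is an inner product of fixed feature vectors, any Gram matrix built from it is symmetric and positive semidefinite, as a kernel must be.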
So the remaining question is really: how valid is this approximation? Everything sounds nice, and after the linearization everything becomes super easy, but in what cases can it be valid? Go ahead. [INAUDIBLE] the inner product [INAUDIBLE]. Yeah, the inner product is just the usual inner product, because these two gradients are just vectors. [INAUDIBLE] OK, so what's the dimensionality here? The gradient is in R^p if theta is in R^p. I guess it also depends on what f is: suppose f maps R^d to R, where d is the dimension of x, so the output is one-dimensional. Then the gradient with respect to theta is a p-dimensional vector, where p is the dimension of theta; the gradient with respect to theta has the same dimension as theta. That makes sense, right? So it's a p-dimensional vector, and you take the inner product of two such vectors to define the kernel. Makes sense? Cool. OK. To proceed, let me define two notations just for simplicity. Let L hat of f_theta be the empirical loss with the model f_theta, just a shorthand so we can write things more easily, and let L hat of g_theta be the empirical loss with the model g_theta. OK? So the key idea is that in certain cases this Taylor expansion makes sense; for exactly which cases it works is a big question that we'll probably discuss at the very end. For now, let's just see how it works. The way it works is in the following sense. How do you say it works?
So you say that there exists a neighborhood of theta 0-- let's call this neighborhood B theta 0-- such that several things happen. So one thing is that you have an accurate approximation in terms of function value. So f theta is close to g theta of x. And as a result, L hat of f theta is close to L hat of g theta for every theta in this neighborhood B theta 0. So that's something you want, which makes sense, right? So this is the point of Taylor expansion. You want to approximate the original function. And also, you want that it suffices to optimize in B theta 0. Because if in this B theta 0 there is no good-- maybe let me draw this again. So basically, what we are saying is there is a neighborhood, called B theta 0. And in this neighborhood, first of all-- suppose your empirical loss looks like this. And maybe there's something else happening somewhere else. We don't know. So first of all, you do the quadratic approximation, using Taylor expansion at theta 0. Let's say this is theta 0. You do a quadratic expansion. It looks something like this, very close. That's the best my drawing can do. So basically, you can think of this red curve as the loss of g theta, and the black one as the loss of f theta. So the quadratic expansion is very close to the original function. And second, you want that it suffices to optimize here, right? Because even though the red and the black curves are close, if they are both very high, it doesn't make sense to zoom into this region, right? You should leave this region. But you can say that it suffices to optimize here in the following sense. So there exists an approximate global min theta hat in B theta 0. So I'm using the superscript for the 0, which might be a mistake, but let me use it consistently. I think in some other lectures I use the superscript for time.
Anyway, so you want to have a theta hat that is a global min. And actually, here, you want L hat of g at theta hat to be approximately 0. And this indicates that you are at a global min, because 0 is the minimum-- there is no way you can go below 0. So if you are close to 0, it means you have to be close to a global min. And this also implies that L hat of f at theta hat is close to 0. But with these two, we still don't really understand how we optimize the black curve, right? So, third, you also want to know that optimizing this loss L hat of f theta is similar to optimizing L hat of g theta. And not only this, but also that the optimization does not leave B theta 0. Because if you leave B theta 0, then all bets are off-- your Taylor expansion breaks. So you have to say that, when optimizing either L hat f or L hat g, I don't leave this region. So everything is confined to this region. And this is how we make it work. Of course, you can ask whether this really reflects what happens in reality. The answer is no, not always. But so far, we are just trying to make this work under certain cases, so that we can appreciate why we have to improve on it later. So in some sense, 3 is kind of an extension. So 3, to some extent, follows from 1 and 2. Because if you have a global minimum in this region, and the black and red curves are close, then optimizing should probably converge to that global minimum, and you should stay in that region. To some extent it follows from 1 and 2, but not exactly, technically-- it still requires a formal proof. So what I'm saying is that, if you just want something somewhat informal to think about, then probably you only have to make sure 1 and 2 are happening. But if you really want everything, then you need to prove 3 as well. And 1, 2, 3 can all be made true in various settings with either overparameterization and/or some particular scaling of the initialization.
So you play with the initialization or you play with the width, and you also need small stochasticity, or even zero stochasticity. So if you play with the overparameterization and the scaling of the initialization, and also insist that there's no stochasticity that makes you go very far-- because the stochasticity would let you leave the local neighborhood-- then you can achieve all of this. That's why you want small stochasticity. And how do you get small stochasticity? In a nutshell, you either need a smaller learning rate or full batch gradient descent. So in some sense, this is the limitation, right? This is a limitation because you require this. And it's also a limitation because you have to play with the scaling-- you cannot just take things as they are. And what you eventually get is probably not exactly matching what people do in practice. OK, cool. So now, let's see how we do 1 and 2. But still, regardless of all the limitations, this is an interesting approach. It's kind of surprising that such a region even exists. Even if you just think about 1 and 2, and you don't care about any of the limitations, it's still kind of interesting that there exists such a region where you can basically be close to a convex function-- actually a quadratic function, if the loss is quadratic-- and there's still a global minimum. It suggests that there's a lot of flexibility in this landscape of neural networks, right? So even when you have a lot of overparameterization and nonconvexity, somewhere you have to have a convex region, right? So that's basically what it's saying. So globally this landscape is very nonconvex, very complicated. But at some special places, in some neighborhoods, you really have a convex function. And that convex function has a global minimum, which is 0. So even this is still somewhat surprising. OK. So now, let's try to formalize 1 and 2.
And then we talk about 3. So how do we do this? So let's introduce some notation. Let phi i be phi of xi, the features for the i-th example, which is really this gradient. And I define this feature matrix to be phi 1 transpose up to phi n transpose-- you put all the features in rows-- which is n by p, where p is the number of parameters. So now, we can see that the loss function with respect to the linear model is just a linear regression problem, which you are probably familiar with. And I'm taking quadratic loss, or mean squared loss. So this is just yi minus delta theta transpose phi of xi-- recall that you have basically a linear model in delta theta-- squared. All right, so I guess maybe it's easier to write it the other way, so that it's more consistent with the notation here-- phi of xi transpose delta theta. So if you write it in matrix notation, this would be 1 over n times the 2-norm squared of y vec minus phi times delta theta, where y vec is the concatenation of all the labels, which is in R^n. So this is exactly linear regression, where delta theta is your parameter and phi is your design matrix, or feature matrix. And let's assume-- this is just for convenience-- yi is on the order of 1, so that the 2-norm of y vec is on the order of square root n. So here's the lemma that kind of characterizes-- I guess, so, lemma-- this is, in some sense, for 2. You are trying to see in what neighborhood you have a global minimum. So suppose p is bigger than n-- you have more parameters than data points-- the rank of this feature matrix equals n, and the minimum singular value is some sigma greater than 0. Then let delta theta hat be the minimum norm solution to phi delta theta equals y vec. All right. So you want to fit phi delta theta to y vec. And you want to understand where the nearest global minimum is, right? So this is the nearest global min in some sense, right?
Because if you fit it exactly, you are achieving the global min. And you want delta theta hat to be the smallest, so that means you are looking for the nearest one. And if you are looking for the nearest one, then you can have a bound on the nearest global minimum, where the bound is something like this: square root n over sigma. So the bound itself, so far, is not that interpretable. But the point here is that, if you take the ball B theta 0 to have this radius-- to be all the theta of the form theta 0 plus delta theta such that the delta theta 2-norm is less than square root n over sigma-- then this ball B theta 0 will contain a global minimum. OK, so this is characterizing how large the ball needs to be, how large the region needs to be, so that it can contain a global min. And the number here, so far, is not interpretable; I'm going to compare it with some other things. Because by itself-- how large does the region need to be? If you just care about 2, then you can take the region to be as large as possible. You have to compare it with something else. And the proof is also pretty easy. This is really just a simple, almost trivial thing. You can write delta theta hat as the pseudo inverse of phi times y vec, because it's the minimum norm solution. And there are some-- I guess this is not extremely obvious, but you can invoke some relatively basic properties of the pseudo inverse. You know that the operator norm of the pseudo inverse is at most 1 over the minimum singular value of phi-- actually, I think they're exactly the same-- and this is equal to 1 over sigma. And then you have a bound on the delta theta hat 2-norm: the operator norm of the pseudo inverse of phi times the 2-norm of y vec. So this becomes 1 over sigma times square root n. That's it. I guess I don't even need a big O. I don't know why I have the big O, sorry.
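As a quick numerical check of this lemma (a sketch of my own, using a random Gaussian feature matrix, which is full rank almost surely; the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 100                    # p > n: more parameters than data points
Phi = rng.normal(size=(n, p))     # feature matrix; rows are phi(x_i)^T
y = rng.normal(size=n)            # labels, entries on the order of 1

# Minimum-norm solution of Phi @ dtheta = y via the pseudoinverse.
dtheta = np.linalg.pinv(Phi) @ y

sigma = np.linalg.svd(Phi, compute_uv=False).min()  # minimum singular value
assert np.allclose(Phi @ dtheta, y)                 # zero loss: a global min
# The lemma's bound: ||dtheta|| <= ||y|| / sigma, i.e. O(sqrt(n) / sigma).
assert np.linalg.norm(dtheta) <= np.linalg.norm(y) / sigma + 1e-9
```

So the ball of radius roughly square root n over sigma around the initialization really does contain an exact global minimum of the linearized loss.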
Just for me, it's always safe to have big O, so it's just part of my brain-- I cannot work without big O anyway. Oh, wait-- I think I do need a big O, because I'm only assuming that y is on the order of square root n. So because I'm only assuming the 2-norm of y vec is less than O of square root n, that's why I need a big O. But anyway, the constant doesn't matter here. You get the point, I guess. OK, so any questions so far? So now, let's see whether this region is too big or too small. It sounds somewhat big because n is there. But actually, you'll see that the region is not that big, because sigma could be made very big in some sense. Or rather, it's a relative kind of thing-- you have to compare it with something else, namely with how good the approximation is in the region. So the next lemma is for 1, in some sense. So suppose the gradient of the network is beta-Lipschitz in theta, in the sense that, for every x, for every theta and theta prime, you have this. So here, what I'm writing is the gradient as a function of theta, because I evaluate it at some arbitrary theta. So I want this, as a function of theta, to be Lipschitz in theta. That means that, if you choose two different places, theta and theta prime, the difference between the gradients-- I have to use the L2-norm here because they are vectors-- is bounded by beta times the difference in the theta space. So if you have this, then we know that f theta of x minus g theta of x, your approximation error, is less than big O of beta times the 2-norm squared of delta theta.
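A one-dimensional sanity check of this second-order error (my own toy function, not the network from the lecture): halving the distance to the reference point should cut the linearization error by about a factor of 4.

```python
import numpy as np

# Toy model f(theta) = tanh(theta * x) with a scalar parameter.
x, theta0 = 1.7, 0.3
f = lambda t: np.tanh(t * x)
grad = lambda t: x * (1 - np.tanh(t * x) ** 2)
g = lambda t: f(theta0) + grad(theta0) * (t - theta0)  # linearization at theta0

e1 = abs(f(theta0 + 0.10) - g(theta0 + 0.10))
e2 = abs(f(theta0 + 0.05) - g(theta0 + 0.05))
print(e1 / e2)  # roughly 4: the error scales like |theta - theta0|^2
```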
Because the difference between these two basically depends on how far you are away from the reference point. At the reference point they are exactly the same. And as you move a little bit away from the reference point, you're going to incur some error, and the error is second order. That's also intuitive. So the important thing is that, for every theta in the B theta 0 that we just defined, we have that f theta of x minus g theta of x is less than beta n over sigma squared. And that's just by plugging in the definition of B theta 0. B theta 0 has radius square root n over sigma, and you plug that in here. You get that, in this region, you have this bound on how good your approximation is. So-- Is that beta n? Oh, sorry-- yes, beta n over sigma squared, my bad, it was just a copy-pasting error. OK, so far this bound-- so I saw a question. By the way, you can feel free to unmute, but I can read the question now. How do we define phi superscript plus? So what is this phi plus? This is the pseudo inverse of phi. This is actually the most common definition of the pseudo inverse. You can roughly think of it as the inverse of phi, with some small caveats. Thanks for the comments in the chat. I think this is supposed to be taught in a linear algebra course, maybe. I don't know. For the sake of simplicity, just think of the pseudo inverse as the inverse if you are not super familiar with it. And then you can verify this is a solution to this equation, right? So you plug this delta theta hat into the equation.
You can cancel phi with the pseudo inverse of phi, and you get y vec. That's how you verify this is a solution to the equation. And also, I think another useful thing to know is that the pseudo inverse has exactly the inverse of the spectrum of the original matrix. So suppose phi has singular values sigma 1 up to sigma k. Then the pseudo inverse has singular values 1 over sigma 1 up to 1 over sigma k. And if all the sigmas are positive-- you ignore the 0 singular values-- then this is exactly true. So the singular values just get inverted. OK, cool. So I hope that answers the question. OK, going back to the second lemma, for number 1: this is saying how good your approximation is in this neighborhood, right? So we got this number, and I'm going to explain this number. That's the important thing. So how small is it? If it's small, that's great. If it's big, that's a problem. But maybe let me first say the proof of this lemma. The proof basically follows from the fact that, if you have an h theta whose gradient is beta-Lipschitz-- and gradient Lipschitzness is basically equivalent to the Hessian operator norm being bounded by beta, if everything is differentiable-- then you can bound the error of the Taylor expansion. So you can say that h theta minus h theta 0 minus gradient h theta 0 transpose theta minus theta 0 is bounded by O of beta times theta minus theta 0 2-norm squared. And in our case, if you take h theta to be f theta of x, then you get the lemma above. So the point is your approximation error is second order in the difference between your point and the reference point. OK. And there's a small remark: if f theta involves ReLU, then nabla f theta is not even continuous. So it cannot be Lipschitz everywhere.
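The inverted-spectrum fact is easy to check numerically (a small sketch; the shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
Phi = rng.normal(size=(5, 12))                     # full rank 5 almost surely
s = np.linalg.svd(Phi, compute_uv=False)           # singular values of Phi
s_pinv = np.linalg.svd(np.linalg.pinv(Phi), compute_uv=False)

# Nonzero singular values of the pseudoinverse are exactly 1 / (those of Phi),
# so its operator norm is 1 / sigma_min(Phi) -- the fact used in the proof above.
assert np.allclose(np.sort(s_pinv), np.sort(1 / s))
assert np.isclose(np.linalg.norm(np.linalg.pinv(Phi), 2), 1 / s.min())
```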
And this requires some special fixes. The fixes are not that surprising, because even though the gradient is not continuous everywhere, it's still continuous almost everywhere. So basically, it's close to being Lipschitz. And in some sense, if you look at the loss averaged over data points, then you still have some Lipschitzness. But let's not discuss that-- it's a low-level detail which is not important. We can just assume we are not dealing with ReLU; we are dealing with something like sigmoid, and then there is no such issue. OK, cool. So now, let's go back to the main thing. The main thing is whether this is a good bound, right? So you have found the B theta 0, and you have shown that, in this B theta 0, you have such an approximation error. So the important question is: what is this beta n over sigma squared? Is it small or big? And the interesting thing is that this quantity is not scaling invariant. So n is something you cannot change, right? But beta over sigma squared is not scaling invariant. So what does that mean? I think the easiest way to think about it is that you have sigma squared in the denominator and beta in the numerator, so somehow you can play with the scaling to make this go to 0. So there are two cases. Actually, there are more than two cases, but I'm going to discuss two. These are different papers, but I'm going to unify them in the following way. So there are two cases where beta over sigma squared can go to 0. The first way is that you can reparameterize with a scalar. And this is in Chizat and Bach, I think '19, and the paper is called On Lazy Training in Differentiable Programming. So I guess the paper title suggests that they're saying this is a lazy way of training networks.
It's not really the final answer for how training should work. But nevertheless, the paper is very nice. And what they do is the following. You let your parameterization f theta of x be the following: you take alpha times f bar theta of x. And let's make f bar a fixed, standard neural network-- fixed in the sense that you don't change the architecture. You just take whatever standard network with some fixed width and depth, and so forth, something that you don't change. And you only change alpha. So for every alpha, this is a valid network; it just has a different scaling in front of it. So for every alpha, you get a neural network. And then let's see how everything changes as you change alpha. You also fix the initialization scheme theta 0. And then let sigma bar be the sigma min of the base network-- let's say the base network is f bar theta-- so the minimum singular value of the feature matrix of the base one. And let beta bar be the Lipschitz constant, also of the base one. So sigma bar and beta bar do not change as you change alpha. And now, let's see how alpha changes the final sigma and beta of your final network. So sigma is equal to alpha times sigma bar. Because once you take f bar theta and multiply by alpha, all the features-- the gradients-- become alpha times bigger. Everything becomes alpha times bigger, right? This is just because of the chain rule: if you take the gradient with respect to theta of f, it's the same as alpha times the gradient of f bar with respect to theta. So everything gets scaled. And beta also gets scaled by alpha, because the gradient gets scaled, for the same reason.
And then you can see that you get a factor of alpha for free in this quantity. So beta over sigma squared becomes beta bar over sigma bar squared times 1 over alpha. And this can go to 0 as alpha goes to infinity. So basically, they're saying that whatever network you take, whatever initialization-- as long as your sigma bar and beta bar are reasonable, and sigma bar is not 0-- you have some beta bar over sigma bar squared that might be bad. But you can always rescale, reparameterize it with a constant in front, so that this key quantity, beta over sigma squared, goes to 0. And if this goes to 0, what does it mean? It means that your approximation becomes better and better. And at some point, if you make your alpha large enough, you make this approximation super good, right? So basically, you have found the neighborhood such that, in that neighborhood, your approximation is very good, if you take alpha to be big. [INAUDIBLE] No, the loss wouldn't change, right? That's a good question. What is the loss? The loss is something composed on top of this network, right? So the loss is L of, for example, alpha f bar theta of x, and y. So first of all, at initialization, we always try to make the output at initialization 0. So that wouldn't change. And second, even though seemingly this whole thing is big-- sure, that's true. But we showed that you have a global minimum in this neighborhood B. I'm not sure whether that makes sense. OK, let me try to draw a figure to answer this question. So the question is: when alpha is big, it sounds like the function value becomes big, right? So that's true. But I think what happens is-- how do I visualize this? I think, if you stretch alpha, your loss will be sharper.
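Here is a scalar-parameter sketch of the rescaling argument (the base model tanh and all the numbers are my own stand-ins, not from the paper): both the gradient scale sigma and the gradient's Lipschitz constant beta grow linearly in alpha, so beta over sigma squared shrinks like 1 over alpha.

```python
import numpy as np

x, theta0, h = 1.0, 0.2, 1e-4

def sigma_and_beta(alpha):
    # f_alpha(theta) = alpha * tanh(theta * x): a stand-in for alpha * fbar.
    f = lambda t: alpha * np.tanh(t * x)
    grad = lambda t: (f(t + h) - f(t - h)) / (2 * h)           # finite-difference gradient
    sigma = abs(grad(theta0))                                  # "feature" scale
    beta = abs(grad(theta0 + h) - grad(theta0 - h)) / (2 * h)  # curvature scale
    return sigma, beta

s1, b1 = sigma_and_beta(1.0)
s10, b10 = sigma_and_beta(10.0)
print(s10 / s1, b10 / b1)             # both ~10: sigma and beta scale with alpha
print((b10 / s10**2) / (b1 / s1**2))  # ~0.1: beta / sigma^2 drops like 1 / alpha
```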
So look at the dependency on alpha: if you make alpha bigger, you make this neighborhood smaller, right? So you make the neighborhood smaller, and you get something like this, very sharp in the neighborhood. So if alpha is bigger, you can actually find a global minimum even closer by-- you have to move even less from the initialization-- because if you do a little bit of work, you already fit the data. I'm not sure whether that makes sense. OK, so there's one thing which is always useful, which is that f theta 0 of x is 0. So you always start from output 0, where you don't have any scale, right? And if alpha is big, then this is still 0, right? But when alpha is big, you are more sensitive to theta. So that's why, if you change theta a little bit, you can already fit your data. So you only have to change very, very little from theta 0 to fit your data. And when you change very little, your approximation is very good in that neighborhood. I'm not sure whether that makes sense, but maybe you can discuss. It's a little bit confusing, I agree. It's really just about how the relative difference between beta and sigma depends on alpha, right? So in some sense, if you have a larger alpha, you need a smaller neighborhood, and your function is much less smooth-- your function becomes sharper. But actually, the neighborhood shrinks faster than the sharpness grows. So that's why it's working. Yeah. I hope that somewhat answers the question. But generally, this is somewhat confusing. And there's another case where we can also see this. So the other case is if you overparameterize.
So here-- this is actually the setting that the original first few papers which invented the NTK approach take. So basically, you say you have a model y hat, which is equal to 1 over square root m times the sum of ai sigma of wi transpose x. This is a two-layer network with m neurons. And I'm scaling this mostly for convenience, because whatever scale you use, you can change other scales to compensate. And the convenience comes from the fact that, if I choose everything on the order of 1, then this will output something on the order of 1, which you will see. But let's discuss that in a moment, after I introduce the notation. So I'm going to have this matrix W, which contains all the wi's as rows. And W is m by d. And sigma-- let's not take ReLU here. Sigma is something that is 1-Lipschitz and has a bounded second order derivative. You won't see exactly how those conditions come into play explicitly; they're not super important. And what is the initialization? So ai is initialized to be plus 1 or minus 1 and is not optimized at all. So the ai's are not even parameters, technically speaking. And wi is a parameter; wi 0 is initialized from a Gaussian, a d-dimensional Gaussian with spherical covariance. And let's say the norm of x is on the order of 1-- this is just for convenience, so that we have a fixed scaling. And the parameter theta is really just a vector of dimension d times m, a vectorized version of W, OK? And we'll assume m goes to infinity. So m is eventually, technically, polynomial in n and d. So n and d are considered to be fixed, and m is something that becomes bigger and bigger. And that's the power. So everything comes from the scaling of m.
So I guess, just to explain why we want this 1 over square root m and this initialization scale: if you look at sigma of wi 0 transpose x, this is on the order of 1. Because wi is a spherical Gaussian and x has norm 1, wi transpose x is on the order of 1, and then you apply something like a 1-Lipschitz activation, and you are still on the order of 1. And then the sum of these will be on the order of square root m, because you have m of these terms with random plus 1 minus 1 signs-- because ai is plus or minus 1-- so they cancel in some sense, and you get square root m. And that means f theta 0 of x is on the order of 1, because you have another 1 over square root m in front. So that's one of the reasons why you choose this scaling, OK? So initially, our output is on the order of 1. And now, let's see how sigma and beta depend on all of these quantities. We hope that the key quantity, beta over sigma squared, goes to 0 as m goes to infinity. So let's first look at sigma. Sigma is the sigma min of this feature matrix phi. And the sigma min of phi is the square root of the sigma min of phi phi transpose, because the spectrum of phi phi transpose is just the square of the spectrum of phi. And what is phi phi transpose? Phi phi transpose is basically the empirical kernel matrix, right? The ij entry is just the inner product between the features of two examples. And let's look at the scaling of this phi phi transpose. To do that, you have to look at the gradient. So if you look at the derivative of the output f theta with respect to each of these wi, you can use the chain rule, and you get 1 over square root m times ai sigma prime of wi transpose x, times x. So this is the gradient with respect to every neuron wi, every vector wi.
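A quick check of this scaling (a sketch with tanh in place of the generic smooth activation; sizes are arbitrary): the output at initialization stays on the order of 1 no matter how wide the network is.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
x = rng.normal(size=d)
x /= np.linalg.norm(x)        # ||x|| = 1, as assumed in the lecture

def f_init(m):
    # f(x) = (1 / sqrt(m)) * sum_i a_i * sigma(w_i . x) at initialization:
    # a_i = +-1 fixed, w_i ~ N(0, I_d), sigma = tanh as a smooth stand-in.
    a = rng.choice([-1.0, 1.0], size=m)
    W = rng.normal(size=(m, d))
    return a @ np.tanh(W @ x) / np.sqrt(m)

for m in [100, 10_000, 1_000_000]:
    print(m, f_init(m))       # stays O(1): the +-1 signs cancel the sum to ~sqrt(m)
```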
And that means that, if you look at the entire gradient-- the norm over all the per-neuron gradients-- then it's 1 over m times the sum over i of sigma prime of wi transpose x squared, times the 2-norm of x squared. And what is this? It's kind of hard to know exactly, but you mostly care about the dependency on m, right? So what's the dependency on m? As m goes to infinity, by concentration, this empirical average-- there is a 1 over m here, right-- converges to the expectation: the expectation of sigma prime of w transpose x squared, where w is from the spherical Gaussian, times the 2-norm of x squared, which is basically 1. And this whole thing will be something like O of 1. To see that it's O of 1 is maybe somewhat tricky, but at least you know that m is not in this equation. So basically, this is saying that this quantity, as m goes to infinity, is on the order of 1 and doesn't change with m. And you can do the same thing for the inner product of the gradients at two examples. The same thing happens: the inner product is something like 1 over m times the sum over i of sigma prime of wi 0 transpose x-- that is at the initialization-- times sigma prime of wi 0 transpose x prime, times x transpose x prime. And as m goes to infinity, by concentration, this concentrates around its expectation, which is the expectation of sigma prime of w transpose x times sigma prime of w transpose x prime, times x transpose x prime, where w is from the spherical Gaussian. So again, this does not depend on m, OK? So basically, this is saying that this entire matrix phi phi transpose converges to some constant matrix as m goes to infinity. And I think, this matrix, sometimes people call it K infinity.
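The concentration of the kernel entries can also be seen numerically (my own sketch, tanh activation; note that ai squared is 1, so the signs drop out of the inner product):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
x = rng.normal(size=d);  x /= np.linalg.norm(x)
xp = rng.normal(size=d); xp /= np.linalg.norm(xp)
sp = lambda u: 1 - np.tanh(u) ** 2   # sigma' for sigma = tanh

def ntk_entry(m):
    # <grad f(x), grad f(x')> = (1/m) sum_i sigma'(w_i.x) sigma'(w_i.x') * (x.x')
    W = rng.normal(size=(m, d))
    return np.mean(sp(W @ x) * sp(W @ xp)) * (x @ xp)

small = [ntk_entry(100) for _ in range(5)]
large = [ntk_entry(100_000) for _ in range(5)]
print(np.std(small), np.std(large))  # fluctuation around K_infinity shrinks with m
```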
And this is the neural tangent kernel with m equal to infinity. So this is a fixed matrix. And you can show that this matrix is full rank-- it can be shown that K infinity is full rank; I'm going to skip this part. So let's take sigma min to be the sigma min of K infinity, which is larger than 0. Then you can show that the sigma min of phi phi transpose is larger than, for example, 1/2 times sigma min, if m is sufficiently big, just because phi phi transpose converges to the constant matrix K infinity. So if m is sufficiently big, your eigenvalues should also converge. Again, I didn't do this exactly rigorously, but you can expect that, when you converge to some matrix, your spectrum should also converge to the spectrum of that matrix. So basically this is saying that our sigma is not changing, in some sense, as m goes to infinity. But let's see how beta changes. We will show that beta goes to 0 as m goes to infinity, so that beta over sigma squared, the key quantity, will go to 0. And let's see how much time I have. OK. So now, what we do is look at the Lipschitzness of the gradient, which means you care about the difference between the gradients at two points, theta and theta prime. And we have computed what the gradient is. Both of these gradients are matrices, because theta is really a matrix, right? And the gradient with respect to each row wi is 1 over square root m times ai sigma prime of wi transpose x, times x, for i from 1 to m. So each of these is a block of the gradient. And if you look at the norm of the difference between the two gradients-- the Euclidean norm-- it's the sum of the squared norms of each of the components. So you get 1 over m, which comes from this 1 over square root m.
And then you look at the norm of each of these components. This is a scalar times a vector, so you get the x 2-norm squared times the scalar sigma prime of wi transpose x minus sigma prime of wi prime transpose x, squared. And then, suppose you want to get rid of this sigma prime: this is less than 1 over m times the sum without the sigma prime-- with a big O-- because we're assuming that sigma prime is O of 1-Lipschitz. And of course, this doesn't work for ReLU; as I said, for ReLU you have to fix it in some other way. And then you get rid of the x: the norm of x is 1, as we assumed, so by Cauchy-Schwarz each term is at most wi minus wi prime 2-norm squared times x 2-norm squared, and x 2-norm squared is 1. And then the whole thing is 1 over m times the squared Euclidean distance between theta and theta prime. So this is saying that the Lipschitz constant-- I guess the Lipschitz constant is 1 over square root m, because we have to take the square root, right? So beta is 1 over square root m. And now, if we look at the key quantity, beta over sigma squared, this equals 1 over square root m, divided by something that doesn't depend on m-- sigma min squared, right? So this will go to 0 as m goes to infinity. So here, I think, the radius you need is always the same, because sigma is always the same. But your function becomes more and more smooth-- your gradient becomes more and more Lipschitz-- as you have more and more neurons. So that's why eventually, as you have more neurons, you can get into this regime. Let's see. OK, so let me take the next 10 minutes to discuss the outline of the next steps. Any questions so far? So now, suppose I try to establish 3.
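The 1 over square root m Lipschitz scaling of the gradient can be checked empirically too (a sketch of mine, tanh again, with random perturbation directions):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
x = rng.normal(size=d); x /= np.linalg.norm(x)
sp = lambda u: 1 - np.tanh(u) ** 2   # sigma'

def grad_ratio(m, eps=1e-3):
    # ||grad_W f(W') - grad_W f(W)|| / ||W' - W|| for the width-m network,
    # where the gradient w.r.t. row w_i is (1/sqrt(m)) a_i sigma'(w_i.x) x.
    a = rng.choice([-1.0, 1.0], size=m)
    W = rng.normal(size=(m, d))
    E = rng.normal(size=(m, d)); E *= eps / np.linalg.norm(E)  # small perturbation
    g = lambda V: ((a * sp(V @ x))[:, None] * x[None, :]) / np.sqrt(m)
    return np.linalg.norm(g(W + E) - g(W)) / np.linalg.norm(E)

for m in [100, 10_000]:
    print(m, grad_ratio(m))  # the empirical Lipschitz ratio shrinks like 1/sqrt(m)
```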
So recall that 3 is about optimizing g, and optimizing f, being similar. So you can basically do two things. There are a lot of different ways to analyze this. And all the analyses, I think, you can probably think of as two steps implicitly, even though the first step you probably don't have to write in the paper. But I'm pretty sure many people do that when they derive the analysis. So the first step: it sounds reasonable to say that you first analyze optimization of L hat g theta. And the second step is that you somehow analyze optimization of L hat f theta by reusing the proofs in A in some way. Of course, you cannot reuse them exactly, but you can probably reuse most of the ideas. And your intuition is that these two things are similar, so somehow you can reuse the proof to do the actual optimization for the neural network f theta. And there are essentially two ways for A. Maybe there is a possibility that I missed some of the existing papers. But roughly speaking, there are two ways for A. And, therefore, there are two ways for B in some sense. So the first way, let's say i, is that you leverage the strong convexity of this L hat g theta, and then show exponential convergence. I have to say that the definition of strong convexity, I'm not sure whether I have really given it in this course. This is a stronger notion of convexity, if you haven't heard of it-- you probably haven't. It's not super essential for this course. But if you have heard of it, you know what kind of things I'm talking about. Because A is analyzing how you optimize a convex function, it does require a little bit of optimization background. At least on a conceptual level, you can imagine there are many different ways to analyze the optimization for regression. So strong convexity is the stronger version of convexity. And you can somewhat use that to get a very fast convergence rate.
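As a toy illustration of that fast rate (a hypothetical well-conditioned least-squares problem, not the lecture's g): with step size 1 over the smoothness constant, gradient descent on a strongly convex objective shrinks the distance to the optimum by a constant factor every iteration.

```python
import numpy as np

rng = np.random.default_rng(1)

# (1/2n) ||A theta - b||^2 is smooth and strongly convex when A has full column rank
n, p = 50, 5
A = rng.standard_normal((n, p))
b = rng.standard_normal(n)
theta_opt, *_ = np.linalg.lstsq(A, b, rcond=None)

L_smooth = np.linalg.eigvalsh(A.T @ A / n).max()    # smoothness constant of the objective

theta = np.zeros(p)
errs = []
for _ in range(200):
    theta -= (1.0 / L_smooth) * (A.T @ (A @ theta - b) / n)
    errs.append(np.linalg.norm(theta - theta_opt))

# errs[t] decays roughly like (1 - mu/L)^t, i.e. exponentially (a "linear rate")
print(errs[0], errs[-1])
```

After 200 steps the error has collapsed by many orders of magnitude, which is the "decay by a constant factor each time" behavior described next.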
Exponential means, every time, you decay the error by a constant factor, so that you get exponential decay of the errors. And another way to do this is that you don't use the strong convexity, because sometimes you actually don't have strong convexity in certain cases. So you don't use the strong convexity, but only use the smoothness. The smoothness means that you have a bounded second order derivative. And again, if you have taken some courses about optimization, then this would make a lot of sense, probably, because there are different ways to analyze optimization. Sometimes you only have smoothness; you have a different kind of analysis. And based on these two approaches, you can get two different proofs for B as well. And we're only going to talk about-- sorry, talk about i, the first approach. And for this approach, no prior knowledge is required. You probably wouldn't understand exactly what I'm saying about this conceptual thing, but the actual proof doesn't require prior knowledge. And it's actually also pretty intuitive by itself as well. So I think we are going to talk about that approach, the concrete analysis, next week, next lecture. But before ending this lecture, let me make another remark, which I think is useful. And in some sense, it's more useful for the second approach. But it's also useful for the first approach. So this is an interesting observation, or maybe intuition you can say, and particularly useful for ii. So this is saying: at any theta t, suppose you take the Taylor expansion with reference point theta t. So now, we are not taking the Taylor expansion at theta 0. We are taking the Taylor expansion at theta t. You can define this g t of theta, x as a function of theta, Taylor expanded at theta t, so the reference point is theta t: it is f theta t of x plus the inner product of the gradient of f theta t at x with theta minus theta t. So this is a linear function in theta. And then you can consider nabla of L of f theta, evaluated at theta t, right?
So this is the gradient that you actually-- this is the gradient you are taking, because what you really care about is optimizing f, right? So this is the gradient you are taking. But actually, it's the same as the gradient of this Taylor expansion at the same point, theta t. So these two things-- there are two t's here. This is theta t. And this t is indicating that this is also the Taylor expansion with reference point theta t. So why is this the case? I guess, if you want, you can take the derivative and verify it. But fundamentally, this is really just saying that f theta and g t of theta agree up to first order at theta t. This is by Taylor expansion. If they agree up to first order at theta t, then this implies L of f theta and L of g t of theta also agree up to first order at theta t. So that's why-- so what does this really mean? This really means that gradient descent on f, on this function, or maybe technically on L hat f theta, where you are taking the gradient with respect to theta through f-- this is the same as taking online gradient descent-- I guess I haven't defined online gradient descent, but let me define that in a moment after I write this down-- on a sequence of changing objectives L g theta 0 up to L g theta t. So what does online gradient descent really mean? It just means that every time you take the gradient of the new function-- you have a sequence of functions. And every time you get a new function, you take the gradient of that function and you take one step. So that's online gradient descent. So basically, you are saying that taking gradient descent on this fixed function L hat is the same as taking gradient updates with respect to a sequence of changing functions. And this is actually how the second approach really works. So this means that you can use the online learning approach. I guess, in this course, I'm not planning to talk about online learning.
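This first-order agreement can be verified directly. A minimal sketch (hypothetical model and data; the quadratic parameterization is just a convenient nonlinear-in-theta example):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 5, 8
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

def f(beta):          # f_beta(x_i) for all i: a model that is nonlinear in its parameters
    return X @ (beta * beta)

def jac(beta):        # Jacobian of f: d f_beta(x_i) / d beta = 2 beta ⊙ x_i, stacked (n, d)
    return 2 * beta * X

beta_t = rng.standard_normal(d)

# gradient of the actual loss L_hat(f_beta) = (1/2n) sum (f_beta(x_i) - y_i)^2 at beta_t
g_f = jac(beta_t).T @ (f(beta_t) - y) / n

# gradient at beta_t of the loss of the linearization g^t(beta) = f(beta_t) + J(beta_t)(beta - beta_t)
def g_t(beta):
    return f(beta_t) + jac(beta_t) @ (beta - beta_t)

g_lin = jac(beta_t).T @ (g_t(beta_t) - y) / n

print(np.allclose(g_f, g_lin))   # True: the two gradients coincide at the reference point
```

Since g^t and f agree to first order at theta t, a gradient step on L hat of f is exactly a gradient step on L hat of g^t, which is the online-gradient-descent view.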
But online learning is trying to deal with the case where we have a sequence of changing functions. So you are not optimizing a single function. You have a changing distribution, or changing environment, or changing loss function, whatever. So there is a rich literature on how you analyze optimization when you have a sequence of changing loss functions. And this is exactly what this is about. You are having a sequence of changing loss functions. And if you analyze that, you can analyze the original case. Now, here there is also special structure in these loss functions, because they are all somewhat similar to each other, right? They are all Taylor expansions with respect to reference points that are in a small region. So you can also leverage additional information about that. Yeah, so this is chapter 10 in the lecture notes. But I think, in this quarter, I just don't think we have time to go there. OK, I think I'm already 5 minutes late. And next lecture, we are going to talk about approach one, which is more self-contained and also kind of cleaner to some extent. OK, maybe just a last comment-- I think there are many different neural tangent kernel papers. I probably am not super comprehensive, but I think most of them are basically a combination of these several things. So one thing is that you have to establish this third step, the optimization. And you have two broad ways, and maybe some subtle underlying differences. And also, you have to establish the first two properties. And those are properties not about optimization. They are about your parameterization of your function class, or initialization, right? So there, you can also have a bunch of different flexibilities. You can change the reference, the scaling. You can change the width. You can do many different things.
Or you can even change, for example, the architecture to make it more efficient or less efficient in certain cases. Yeah. So I don't want to have a very comprehensive discussion of this NTK, just because there are so many limitations. But I think it's a useful thing to know, given that there are so many works in it. And there are, indeed, some nice ideas there. OK, cool. So I guess I'll continue on next Wednesday. Thanks.
Stanford CS229M: Machine Learning Theory, Fall 2021 -- Lecture 15: Implicit regularization effect of initialization

OK, let's get started. I guess everything's working now. OK, cool. So last time we started talking about this so-called implicit regularization effect of the optimizers, and we discussed the very basic one, which is that if you use initialization zero and then you use gradient descent on a linear regression problem, then what you get is the minimum norm solution. This is from last time, and today we're going to talk about a case where we have nonlinear models. And we'll see a similar phenomenon, but we're going to have a somewhat different proof. Right. So OK, so let's dive into the details. So this is the nonlinear model that we-- you know, you will see that this model is nonlinear, but it's not actually that much different from a linear model, as you will see. There is a paper that can do a little bit more than this, but generally we don't know how to deal with very complex models like deep networks. So this is the nonlinear model we're going to consider. So suppose beta is the parameter and x is the input. And the model is f of x equal to the inner product between beta O-dot beta and x. So O-dot is the Hadamard product, meaning the entry-wise product. So, basically you entry-wise square the parameter, and then you take the inner product with x. So this is still linear in x, but it's not linear in beta. OK, and still the loss function will be nonconvex, because it's nonlinear in beta, and when you take the square loss, it becomes nonconvex. So it's not that interesting in terms of the model itself, because anyway you are doing a linear model, but from the algorithm-- the implicit regularization effect perspective, it's still interesting because you have a nonconvex objective function.
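The linear-regression fact recalled at the start (gradient descent from zero converges to the minimum-norm interpolant) can be reproduced in a few lines; sizes here are made up:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 10, 50                                   # overparameterized: many interpolating solutions
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

theta = np.zeros(d)                             # zero initialization
eta = 0.01
for _ in range(20_000):
    theta -= eta * X.T @ (X @ theta - y) / n    # plain GD on the least-squares loss

theta_mn = X.T @ np.linalg.solve(X @ X.T, y)    # closed-form minimum-norm solution fitting the data
print(np.linalg.norm(X @ theta - y))            # ≈ 0: the data is fit exactly
print(np.linalg.norm(theta - theta_mn))         # ≈ 0: and the min-norm interpolant is selected
```

The iterates never leave the row space of X, which is why the limit is the minimum-norm solution rather than an arbitrary interpolant.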
And we're going to make this even more interesting by considering a special case where the ground truth is that y is equal to the inner product of beta star O-dot beta star with x, where beta star is r-sparse. So the reason why we want beta star to be r-sparse is that r-sparse means that the 0 norm of beta star is less than r, that you only have r nonzero entries. And the reason why we want to have this restriction on beta star is because we want to consider overparameterized models. If you have overparameterized models, meaning-- so we consider the case where n is smaller than d. When n is smaller than d, if beta star is fully general, then there's no way you can hope to learn anything from less than dimensionality number of data points. So, basically we make sure beta star is sparse, and so we're going to assume that n is smaller than d but n is larger than some poly r. That's the setting we are going to work with. And more specifically, and for simplicity, without loss of generality let's assume also beta star is larger than zero entry-wise, because you can see that the sign of beta star doesn't really matter in terms of the functionality, in terms of the ground truth model. And actually for simplicity of this lecture, let's also assume that beta star is just the indicator of some subset of coordinates S, where S is a subset of coordinates and the size of S is equal to r. This is only for simplicity of this lecture. OK, and now let's define our data. So I guess we have talked about that we are going to have an overparameterized model. So we have n data points. And n is less than d, so these n data points are denoted by x1 up to xn, and they are iid Gaussian of dimension d, with spherical covariance. And yi is generated from this model without any error, so yi is the inner product of the square of beta star with xi.
So-- and n is much, much less than d, but we'll assume that n is bigger than omega tilde of r squared. So n is roughly bigger than r squared. And this amount of data points in principle allows us to recover beta star. Actually, you only need omega of r to recover beta star if you count dimensionality, right-- there are r degrees of freedom, approximately. So you only need n to be larger than omega r, but for the theory to work, we have to require it to be larger than r squared. But still, if r is very small, then you can still make n much, much smaller than d and still bigger than r squared. Let's say r is a constant, OK? That's probably the right way to think about it. Any polynomial dependency on r is fine, so that n is just something like a big constant. But n can be much less than d. OK, so-- and maybe, after we define this, you may wonder why we have to use this nonlinear model, right? The answer is, no, you don't have to use it if you really want to solve the problem, so the nonlinear model is only introduced to study this effect, right? Suppose you really care about solving the question, then you can use the classical solution, which is called lasso. Or, using terms that we used in this lecture, you can use L1 regularization. So, basically-- I'm not sure whether you all have this background, but typically people use L1 to, in some sense, encourage sparse vectors. I'm not going to get into detail there, but you can show that if you minimize the L1 norm of the model's parameter theta, then you can reconstruct sparse vectors. So in particular, suppose you have this model f-theta of x, which is linear in x. Then this is the so-called lasso, the L1-regularized regression objective, which is something like the square loss plus lambda times the L1 norm of theta. So-- and the classical version of the theory, I'm not going to go into detail here.
In some sense this is-- you know, if you don't know the background, you probably just somewhat memorize it or kind of like treat it as a fact. So the classical theory says that if n is larger than r-- I think you need perhaps some logarithmic factors here-- then this objective function recovers the ground truth, right? So the objective above recovers the ground truth, theta star. And I guess you probably already see that theta star corresponds to beta star up to this squaring, approximately. So, basically, if you just really care about solving this question, you view this as a linear model-- you don't have to care about the quadratic thing-- and then use L1 regularization to recover the sparse vector. There's a rich-- you know, a lot of existing kind of theory about this. I'm not going into details, but this is somewhat believable because you are using the sparsity of the vector. And also, another thing to note is that because the relationship between beta and theta is that theta corresponds to beta O-dot beta, right, the entry-wise square-- the 1-norm of theta equals the 2-norm squared of beta. The 1-norm of theta is the sum of entries of theta, which is equal to the sum of beta-i squared, which is the 2-norm squared of beta. So, basically, if you do the quadratic one, you should regularize the L2 norm, right? So, basically this objective 1 corresponds to an L2-regularized objective. So if you really want to use the quadratic parameterization, you should do this: the square loss of f-beta at x-i, plus lambda times the 2-norm squared of beta, right? This is the objective if we parameterize by beta, right? So in the beta space, you should regularize the L2-norm squared. In the theta space, you should regularize L1, right?
So this is the classical solution, and now when I talk about implicit regularization, I think our goal is essentially, basically saying that if you use small initialization-- this is, you know, without explicit regularization-- this is basically doing the same thing as, let's call this, 2. So as long as you use small initialization with the beta parameterization, you automatically get this L2-norm regularization, for free to some extent. This is not exactly the way to state the theorem, but this is the rough-- basically the main idea. So more concretely, what we are interested in is the objective L-hat beta. Let's formally define it. I think I normalize by 4 here just because it makes the gradient look cleaner-- but it's just a constant factor. So 1 over 4n times the square loss, the mean squared error, right? No regularization, right? So this is our objective, and the optimizer will be that you do GD on L-hat beta with small initialization. And more concretely, the algorithm is: for some very small alpha larger than zero, we initialize beta to be alpha times the all-ones vector. You don't know the support of beta star, of course, so you need to initialize all the entries by alpha. And then you take a gradient descent update every time, so beta-t plus 1 is equal to beta-t minus eta times the gradient at beta-t. OK, so this is the optimizer we're going to study. We're going to claim that this optimizer actually finds beta star even though there's no explicit regularization. Any questions so far? So here is the theorem. So the theorem is that-- basically, the shorter version of the theorem is that when n is omega of r squared, this algorithm converges to beta star, with small alpha.
But there is a little bit of detail, so let me state the main theorem. So I guess, suppose n is bigger than big O r-squared log-squared d-- let me see. Maybe let's write it in this way, I think, just to avoid confusion. So let c be a sufficiently large constant. Suppose n is bigger than c times r squared times log-squared d. I think the dependency on the logarithmic factor is suboptimal. The dependency on r probably is also suboptimal. And let alpha be less than some inverse polynomial, say d to the minus c. Then when the time t-- t is the total number of steps-- is less than 1 over (eta times square root d times alpha) and bigger than (log of d over alpha) over eta, so for this range of time steps, you can recover beta star O-dot beta star in L2 norm with error alpha times square root d. OK, so how do we interpret this? So I guess there are a few remarks for interpretation. So the first thing is that-- I guess this is something I probably should have mentioned earlier-- L-hat beta has many global mins. And why? This is because of overparameterization, because if you count degrees of freedom, you have n data points and d parameters, right? So you have more degrees of freedom than the number of constraints, so you have many, many minimums, right? So that's one of the reasons why you have implicit bias. If you only have one global minimum, there's no way you can have implicit bias. And second thing, how do we interpret all of these quantities in the bound? So the runtime lower bound depends only on the logarithm of alpha. So this means that you can choose alpha to be any inverse polynomial. So alpha can be inverse poly, right? You can basically choose the constant c to be any constant, and then the runtime wouldn't be affected too much. And the error depends on alpha. So, basically, if you want a very, very small error, inverse-poly error, you can just take alpha to be inverse poly, and then the runtime is not changed too much.
And there's an upper bound on the runtime, which means that you need to do early stopping according to this bound. So if you really believe in this, you have to do early stopping, but the early stopping is pretty mild, because you can see that the upper bound actually depends on inverse alpha. So if you take something like alpha to be 1 over d to the power 10, then your upper bound is pretty relaxed. You can run stuff for a long time, right? And actually, in practice we never observed that you have to. If you really do this synthetic example, really run the experiments, you never have to early-stop. And I don't believe that you have to early-stop. This is more or less an artifact of the proof. But this artifact is not too restrictive anyway, because it depends on the inverse of alpha, so you can take alpha to be small to make the bound very relaxed. So we didn't pay attention to removing that completely, even though we believe that it's possible. Anyway, so basically, the right way to use this is that you take alpha to be something super small, and then your error is very small and your runtime lower bound is logarithmic in alpha. OK, so there is one small thing, which is that alpha cannot be zero. So why can't you take alpha to be zero? The only reason why alpha cannot be zero is because beta equal to 0 is a saddle point, so nabla L-hat beta at 0 is 0. This is the part that comes from the quadratic parameterization, right? When we compute the gradient, you will see that because you have the quadratic parameterization, the gradient is always multiplied by beta itself. So if beta is 0, then you just have 0 gradient. And we are analyzing gradient descent-- no noise, nothing, right? So there is no stochasticity, so if you started at 0, you would just stay there forever. That's why you cannot use 0.
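A small simulation of this scheme (the sizes, step size, and alpha are illustrative, not the theorem's exact constants); it also shows the saddle at zero:

```python
import numpy as np

rng = np.random.default_rng(4)

d, n, r = 200, 60, 3                      # n << d, ground truth r-sparse
S = rng.choice(d, size=r, replace=False)
beta_star = np.zeros(d)
beta_star[S] = 1.0                        # beta star = indicator of the support S
X = rng.standard_normal((n, d))
y = X @ (beta_star ** 2)

def grad(beta):
    # L_hat(beta) = (1/4n) sum_i (y_i - <beta ⊙ beta, x_i>)^2
    # => grad = (1/n) [X^T (X(beta ⊙ beta) - y)] ⊙ beta   -- every term carries a factor of beta
    return (X.T @ (X @ beta ** 2 - y) / n) * beta

print(np.linalg.norm(grad(np.zeros(d))))  # exactly 0: beta = 0 is a saddle, hence alpha > 0

alpha, eta, T = 1e-4, 0.05, 2_000
beta = alpha * np.ones(d)                 # small init: alpha times the all-ones vector
for _ in range(T):
    beta -= eta * grad(beta)

print(np.linalg.norm(beta ** 2 - beta_star ** 2))   # small: beta* is recovered, no regularizer
```

With these constants no early stopping is needed, matching the remark that the runtime upper bound is an artifact in practice.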
That's one restriction, but anything close to 0 is fine. In some sense, this log 1 over alpha that you pay is how much time you have to pay to leave the saddle point. So-- and leaving the saddle point is actually very fast. In some sense you can kind of believe it, right, because you have a saddle point-- how do I draw it? Like something like this, right? So leaving it is kind of like optimizing a concave function, and you are going downhill, right? You basically accelerate so fast that eventually you leave it very quickly. So OK, cool. So right, and in some sense you can interpret this as: gradient descent prefers the minimum norm solution in L2. So maybe-- sorry, I should say-- prefers the global minimum closest to initialization, all right? We have somewhat alluded to this, but just to be formal, in this case you can prove the following. So you can prove that beta star is actually the argmin of the 2-norm, with the constraint that you fit the data. Suppose you try to find a global minimum with the minimum L2 norm, right-- so this is the constraint, right? So this means, if you have a global minimum that satisfies everything and you minimize the 2-norm, then it is actually equal to beta star. And the reason why this is true is kind of similar to why the L1 norm works, right-- just because the 2-norm squared of beta is the same as the 1-norm of theta. And this one, if we replace this with theta, then this is true. And this part is by the standard theory, which I didn't show, but you know that if you look at all the linear models that fit the data and you look at the sparsest one, it's going to be theta star. Actually, technically, I think this argmin should be up to the entry-wise square-- sorry, the square root-- because there's a transformation.
But the objective is the same; it's the argmin that has the transformation. I'm not sure how much that matters. That's right, so-- maybe the easiest way to write this is the following. So these two are exactly the same. This is because you have a transformation if you look at the min. All right, and for the first objective, the argmin is beta star, and then you can see that the argmin also transfers, just by taking the square root. So-- and this is also the case for linear regression, right? Recall that we also proved that if you start gradient descent at zero and you do linear regression, you get the minimum norm solution that fits the data. So it's very similar, at least on the surface, from the formula. Like, you have almost the same guarantee. But I don't necessarily believe that this is always the case-- like, I don't feel like you always find the minimum norm solution, the solution that is closest to initialization that fits the data. I don't think this is always true. I think there's still something special about these examples. You know, we cannot just extrapolate generically. OK, so now we are going to try to prove this. Any questions so far? The proof of this theorem is pretty complicated. I will try to finish it in one lecture, but if we cannot, I think I'm going to refer you to the notes. The notes have a pretty detailed derivation. So to get some preparation, let's try to understand some basic stuff about this loss function. So first of all, let's look at the population risk. The population risk is the expectation of (y minus the inner product of beta O-dot beta with x) squared. Right, and you can try to get rid of the expectation because this is the population. So what you do is, you plug in the definition of y, so you get the expectation of (the inner product of (beta star O-dot beta star minus beta O-dot beta) with x) squared. And then this gives you-- I think I have 1/4 here.
That's my population risk, right, because I have an additional 1/4 everywhere. So then this becomes 1/4 times the squared norm of the difference. This is just because the expectation of (the inner product of some vector v with x) squared, if x is Gaussian, is equal to the 2-norm squared of v. So-- and I'm going to claim the following. So you are going to have uniform convergence for sparse beta-- but we don't have uniform convergence over the entire space, right, because we have overparameterization. If you had uniform convergence for everything, then there wouldn't be an implicit regularization effect. That would be kind of the classical theory that we discussed in the first part of the course, right? But we claim that if you look at sparse beta, then you have uniform convergence. So here is the-- I'm going to build towards this. So first there's a claim, which is: with high probability over the choice of data, if n is bigger than something like O tilde of r over delta squared, then for every v such that the support of v is less than r-- oh, I guess, alternatively we can write the 0 norm of v is less than r-- you have the following: the empirical average of (v dot xi) squared is between (1 minus delta) and (1 plus delta) times the 2-norm squared of v. So why do we care about this? I guess this can probably be seen here. All right, so the population has this form, something like (v dot x) squared, and you take this expectation. And this is the empirical one. I'm going to be more explicit in a moment, but this is kind of like a small tool. So we are saying that this empirical version of (v dot x) squared is going to be very close to the population version-- the population is just the 2-norm of v squared-- but only for v that is sparse.
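The claim can be probed numerically: for sparse directions the empirical quadratic form concentrates around its population value even with n much less than d, while a dense direction in the null space of the data is invisible to the empirical average (illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(5)
d, n, r = 500, 200, 5                  # n << d, but n >> r
X = rng.standard_normal((n, d))

def ratio(v):
    # empirical (1/n) sum_i <v, x_i>^2, divided by its population value ||v||^2
    return np.mean((X @ v) ** 2) / np.sum(v ** 2)

ratios = []
for _ in range(50):                    # random r-sparse directions
    v = np.zeros(d)
    v[rng.choice(d, size=r, replace=False)] = rng.standard_normal(r)
    ratios.append(ratio(v))
print(min(ratios), max(ratios))        # both close to 1: concentration, uniformly over sparse v

# a dense direction in the null space of the data (it exists since n < d)
g = rng.standard_normal(d)
v_dense = g - X.T @ np.linalg.solve(X @ X.T, X @ g)
print(ratio(v_dense))                  # ≈ 0, although the population value is 1
```

The dense direction is exactly the kind of overfitting direction discussed shortly: the training data assigns it zero energy even though its population energy is full.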
So this concentration only works-- so if you have enough n's, right, if n is infinite or close to infinite, then you should expect this to work for every v, just because of the law of large numbers or concentration inequalities, right? But here the concentration inequality is more subtle-- there's a finesse here-- because you only care about v's that are sparse, and also you only have this many examples. You don't have a lot of examples. n is not bigger than d, even. n is only bigger than the sparsity r. And actually, just for the language, I guess this is something useful to know. We don't really depend on these kinds of properties, but we say that if the xi's satisfy this condition-- suppose I call it 3-- then we say they satisfy the RIP condition. So, basically, 3 is called the (r, delta)-RIP condition. The acronym is a little bit weird, but I think it stands for the Restricted Isometry Property. The reason why it's called restricted is because you are only restricting to sparse vectors v, right? If you were not restricting the vectors v, then this would be kind of like an isometry condition, because you are basically saying that all the xi's are isometric, right? They are kind of spreading their energy across all the directions equally, right? That's pretty much what you are saying. Right, so what this is really saying is that equation 3 is just equivalent to: v-transpose times (1 over n times the sum of xi xi-transpose) times v is bounded above by (1 plus delta) times v-transpose v, and bounded below by (1 minus delta) times v-transpose v.
Right, so if you require this for every v-- suppose we require it for every v-- then what this is saying is that 1 over n times the sum of xi xi-transpose is, in the PSD sense-- how do I write this? Oh, OK. Right, we'll put that kind of notation for PSD, like this-- less than (1 plus delta) times the identity and larger than (1 minus delta) times the identity. So if you require it for every v, you are basically saying the covariance of the xi's is-- I think it's called isotropic-- the covariance is just the identity, right? So, basically you are saying the covariance is close to the identity. But you are not requiring it for every v, right? And also this is not true if you don't have enough data, right? We only have n less than d data points, so in our case, this matrix is not even full rank. How come you can expect it to be close to the identity? It only has rank n, because we only have n data points and n is less than d, so it's not even a full-rank matrix. There's no way, right? But if you look at the quadratic form, and you only look at the quadratic form evaluated on sparse vectors v, then this matrix effectively looks like the identity. That's basically what this condition is saying. Right, OK. So once you have this lemma, or this claim, then we know that you have uniform convergence for sparse beta. And this is just because L-hat beta is 1 over 4 times 1 over n times the sum of (the inner product of (beta O-dot beta minus beta star O-dot beta star) with xi) squared. And this is of this form, right? So you can treat this as v, and then you are in this form, the (v dot xi) squared, right? And this v is sparse if beta is sparse, because beta star is already sparse. That's our assumption.
And if beta is sparse, then this thing is also sparse. You pay a factor of 2-- they were r-sparse, and now this whole thing would be 2r-sparse, at most. So then this means that this is close to the norm, right-- 1 over 4 times the squared norm of this. And this is equal to L beta. Right, so for sparse beta, you have uniform convergence, but you don't have uniform convergence over the entire space. And also you can have uniform convergence for the gradient of this, if you really care about it-- I think I will show this later for sparse beta. Right, so you can even show the gradient concentrates around the expected gradient: the empirical gradient concentrates around the population gradient for sparse beta. However, on the other hand, there exist dense beta such that, for example, L-hat beta is 0, but L beta is very much larger than 0. These are overfitting solutions-- there are places where the training and test loss are not similar. But those are dense beta. OK. So the question is, why are you finding a sparse one but not a dense one, right? Because the dense one doesn't have the nice property. So the main intuition is the following. So we have done quite some preparation. So the main intuition, or what we believe to be happening, is the following. So you can define Xr to be the set of vectors that are sparse-- beta such that beta is r-sparse. So suppose you look at the space. So you have the entire space, which is probably something very large. And 0 is somewhere here; it's the origin. And you have some family, let's call this Xr. This is the family of sparse vectors. And you know in this Xr, everything behaves so nicely. The training and test losses are just basically the same up to some small error, right? And also in terms of gradients, they are similar. The gradient of L hat and the gradient of L are similar.
And I think basically what happens is that you start from somewhere close to 0. The reason you cannot start exactly at 0 is just that it's a saddle point-- not very important here. And you can think of gradient descent: you are doing gradient descent on the empirical loss L_hat(beta). And because you have uniform convergence, you're basically doing the same thing as gradient descent on L(beta), as long as you don't leave this set X_r. If you leave it, all bets are off; but if you don't leave it, it's fine. So consider the alternative world where you do gradient descent on the population loss-- let's say this is the gradient descent on the population loss L(beta). And it turns out that if you do gradient descent on the population loss, you are going to reach a point, which is beta*, on the boundary of this set, and along this trajectory you never leave the set X_r. So now: we believe the black trajectory is similar to the purple trajectory as long as they are in the set X_r, and the purple trajectory never leaves the set X_r. So that's why the black trajectory also converges to beta*. I'm not sure if that makes sense. So basically, the purple one is the population trajectory and the black one is the empirical trajectory. You know that the empirical trajectory and the population trajectory are similar inside the set X_r-- you don't know anything about the outside world-- and the purple trajectory never leaves the set X_r. Then the black one probably shouldn't leave either, and the black one should be similar to the purple one. For contrast, suppose the purple trajectory left the set. Then you'd lose control: at the beginning you'd be following it properly, then you'd leave the set, and then all bets are off-- you don't have any control anymore. But this turns out to be not what's happening.
What's happening is that the purple one actually stays in the set X_r the whole time, until it reaches beta*, and then it stays at beta*. So that alternative situation doesn't happen-- this is not what's happening. And inside this X_r, everything behaves nicely: there's only one global minimum, which is beta*, and nothing else. Outside X_r, there are a bunch of different things. So outside X_r-- let's use a different color-- there are probably quite a bunch of overfitting solutions, solutions that make your empirical loss 0. There are so many of these solutions. But you never actually get to go to those places, just because your black trajectory is imitating the purple one, and the purple one didn't go to those places, so the black one doesn't go to those places either, right? So that's the intuition for why this is working. Any questions? [INAUDIBLE] beta doesn't leave the [INAUDIBLE]?? But why doesn't the purple one leave the set-- yeah, I didn't give a justification for that either, right? That's something we're going to prove. And I don't think this is a special property of this problem; if you see the proof, it's not that surprising. Because gradient descent is a local search algorithm: you start from somewhere close to 0 and gradually search your neighborhood until you find a global minimum. That's why it probably wouldn't go some circuitous way-- it's going to go more or less straight to the closest point. But the real proof has to go through the math, yeah. My other question is, the initialization scheme we described before isn't in X_r. It's [INAUDIBLE]? Yeah. That's a great question.
So the initialization alpha times the all-ones vector, literally speaking, is not in X_r, right. I get this question a lot, and I think I have some remarks on it somewhere else, but since you asked, I should just answer it here. So the question is, why is the initialization not in X_r? The thing to think about is that, of course, it's not exactly in this sparse set-- but it's close. And close in what sense? Close in the sense that alpha times the all-ones vector is very close to 0, and 0 is in this set. That's the property we're going to use. So yes, you are right that we can never say we're exactly in the set X_r; you are going to say that you are in a neighborhood of X_r, up to a small error. And the error depends on alpha-- that's why we have to choose alpha to be very small. In some sense, you really want to choose 0: from all of this discussion, the only thing you want to do is to choose 0. And 0 just happens to be a saddle point. That's unfortunate, so you have to perturb it a little bit. [INAUDIBLE] The question is whether this particular property has anything to do with the positivity of beta, right? I don't think so. Are you talking about a positive beta* or beta, the variable? Beta star. Beta star. Right, so we assume beta* to be positive, but no matter whether beta* is positive, beta* squared is always positive. So if you initialize from this positive point, you always stay positive. Basically, you just learn the absolute value of beta*, and learning the absolute value of beta* is not that different from learning beta*. So suppose you don't restrict beta* to be positive: then you cannot claim that you recover beta*; you can only say you recover the absolute value of beta*.
But the picture, the intuition, is still the same after that change. [INAUDIBLE] Yeah. So I guess the question is whether we really have to initialize at exactly alpha times the all-ones vector, right? And the answer is-- this is a great question-- no, you don't have to do that. The only thing you have to do is initialize beta_0, which is a vector, so that every entry of it is very small. You only need to make sure its infinity norm is very small, less than something like alpha. And you can even initialize negatively, I think-- if you initialize an entry negatively, then that entry will just become negative eventually, but the sign doesn't matter that much. I'm only using alpha times the all-ones vector for convenience, because it makes the proof cleaner. So given this plan, this intuition, it's natural that we should start by analyzing the population trajectory-- the purple one-- and then try to say that the black one is close to the purple one. So let's start with the population trajectory. You can think of this as a warm-up, or in some sense a sanity check for this approach. Let me state the theorem formally, though I think you're expecting what it says: GD on the population loss converges to beta* in, I think, O(log(1/(epsilon * alpha)) / eta) iterations, with epsilon error in L2 distance. But I guess the formal theorem matters less than the proof-- let's see how the proof goes. The proof is kind of brute force, because you really, literally control what each of the coordinates is doing. So it's pretty explicit, and you see how the coordinates are changing. But explicitness is actually a weakness in some sense: because we are doing such an explicit derivation, it's great for this problem, but it's harder to extend. I think that's a general thing.
So if you have a very strong analysis for a toy case, that's not necessarily always a good thing: if it's too strong, too explicit, then the extensibility-- the applicability to broader cases-- becomes a problem. And this is, in my opinion, probably the main reason why we cannot extend to more general cases than this simple quadratic one. There's an extension to the matrix case-- you can change all of these vectors into matrices, and it's still fine-- but no fundamental extension beyond that. But still, anyway, let me do the analysis. So the proof sketch is that you first compute the gradient of L(beta). So L(beta) is equal to (1/4) || beta odot beta - beta* odot beta* ||_2^2, and if you compute the gradient with respect to beta, it becomes grad L(beta) = (beta odot beta - beta* odot beta*) odot beta. You can verify this with scalars first; the vector version is pretty much taking the sum of the scalar objectives over all the dimensions. Here, all the dimensions are separate: the loss is just a sum of objectives, each about one coordinate, and then it's a simple chain rule. And you can see that everything is always multiplied by beta-- the gradient always carries a factor of beta-- and this is why the gradient at 0 is 0. So now, let's look at the update. The update is beta_{t+1} = beta_t - eta * (beta_t odot beta_t - beta* odot beta*) odot beta_t. Everything here is in d dimensions, but really you can view this as d separate updates in the d coordinates, because the coordinates don't interact with each other at all. So this is really just saying that beta_{t+1, i} = beta_{t, i} - eta * (beta_{t, i}^2 - beta*_i^2) * beta_{t, i}, OK? Every coordinate is just doing its own separate thing.
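As a quick sanity check on the gradient formula above, here is a small sketch of my own (not from the lecture) comparing the closed-form gradient (beta odot beta - beta* odot beta*) odot beta against a finite-difference approximation, and confirming that the gradient vanishes at beta = 0, the saddle point.

```python
import numpy as np

def pop_loss(beta, beta_star):
    # L(beta) = (1/4) * || beta ⊙ beta - beta* ⊙ beta* ||_2^2
    return 0.25 * np.sum((beta * beta - beta_star * beta_star) ** 2)

def pop_grad(beta, beta_star):
    # coordinate-wise chain rule: grad_i = (beta_i^2 - beta*_i^2) * beta_i
    return (beta * beta - beta_star * beta_star) * beta

rng = np.random.default_rng(1)
beta_star = np.array([1.0, 1.0, 0.0, 0.0])
beta = rng.standard_normal(4)

# central finite differences agree with the closed form
eps = 1e-6
fd = np.array([(pop_loss(beta + eps * e, beta_star)
                - pop_loss(beta - eps * e, beta_star)) / (2 * eps)
               for e in np.eye(4)])
print(np.max(np.abs(fd - pop_grad(beta, beta_star))))  # essentially zero

# every term of the gradient carries a factor of beta, so beta = 0 is stationary
print(pop_grad(np.zeros(4), beta_star))                # all zeros
```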
The different coordinates differ only a little bit, because the target beta*_i is different; otherwise, all the coordinates are doing the same thing. So when i is in the support of beta*, which is denoted S-- and recall beta*_i = 1 there-- your update is basically beta_i <- beta_i - eta * (beta_i^2 - 1) * beta_i, where I'm dropping the t subscript just for notational simplicity. And if i is not in the support of beta*, then the update is beta_i <- beta_i - eta * beta_i^3. And you can see that all of this intuitively makes sense. In the first case, suppose beta_i is currently between 0 and 1. Then (beta_i^2 - 1) is negative and beta_i is positive, so the correction term is negative, and subtracting it increases beta_i. So this update is trying to increase beta_i as long as beta_i hasn't yet reached 1. And the second update does the reverse: as long as beta_i is bigger than 0, it decreases beta_i. So basically, the first update encourages beta_i to go to 1, and the second encourages beta_i to go to 0. And that makes sense, because 1 is beta*_i in the first case, and 0 is beta*_i in the other case, right? OK. So now, let's do a more detailed calculation to see what happens in each case-- call these case one and case two. Case one: here the update is trying to increase beta_i until it reaches 1, and there are still two sub-cases. The first sub-case: suppose beta_i at some time t is less than 1/2, so you are at most halfway done with your work. Then let's see the change: beta_{i, t+1} = beta_{i, t} - eta * (beta_{i, t}^2 - 1) * beta_{i, t}, and we argued that this increases beta_i.
And we can see by how much it increases beta_i: it multiplies beta_i by the factor 1 + eta * (1 - beta_{i,t}^2). So you're multiplying beta_i to make it bigger, and how much bigger depends on the value of beta_i itself. But since we know beta_{i,t} is at most 1/2, we can lower-bound this by beta_{i,t} * (1 + eta * (1 - 1/4)) = beta_{i,t} * (1 + (3/4) * eta). So you have exponential growth. And if beta_i is already bigger than 1/2, let's see what happens. Now the growth rate might slow down: if beta_i is close to 1, then the factor 1 - beta_{i,t}^2 becomes close to 0, so the growth rate slows down. And that's true. But what you can do instead is analyze how far you are from 1, from your target. If you look at the distance to 1, you get the following recursion: 1 - beta_{i,t+1} = 1 - beta_{i,t} - eta * (1 - beta_{i,t}^2) * beta_{i,t}. So let's reorganize this a little bit. Using beta_{i,t} >= 1/2, this is at most 1 - beta_{i,t} - eta * (1 - beta_{i,t}^2) * (1/2). And now you can factor out 1 - beta_{i,t}: this equals (1 - beta_{i,t}) * (1 - eta * (1 + beta_{i,t}) * (1/2)). This may feel a little unnatural, but if you see my final target, it's actually not super difficult to guess the intermediate steps. Now I'm going to use the fact that beta_{i,t} is bigger than 0, to get (1 - beta_{i,t+1}) <= (1 - beta_{i,t}) * (1 - eta/2). So my point is that in this regime you are not growing exponentially; instead, you are converging to 1 exponentially fast.
So you are decreasing your distance to 1 at an exponentially fast rate. In some sense, these dynamics have two regimes: when beta_i is small, it grows very, very fast; and when it becomes bigger, the growth rate slows down, but it converges to 1 exponentially fast. So that's what you get if you combine these two regimes. And you can also see that this maintains beta_i < 1: if you are less than 1 before the update, you are still less than 1 after. So if you summarize the behavior: in t_1 = O(log(1/alpha) / eta) iterations, you are in the first regime-- beta_{i,t} grows to 1/2 exponentially fast. You only need this number of iterations because initially it's alpha and you want to grow to 1/2 (technically, there's also a factor of 2 in there if you want), with learning rate eta. Basically, this is because (1 + (3/4) * eta)^{t_1} needs to be at least (1/2) / alpha-- that's the factor of your growth per step raised to the power t_1, and you want to grow by at least a (1/2)/alpha factor-- and that's how you solve for this number t_1. OK, that's the first part. And then, in O(log(1/epsilon) / eta) further iterations, beta_{i,t} converges to within epsilon of 1. This is because you start from distance 1/2 and want to reach distance epsilon, and each step decreases the distance by a factor of 1 - eta/2; that's why you pay this number of iterations. Does this make sense? I guess there is one small thing I skipped, about how you derive the number of iterations: if you want (1 + eta)^t to be bigger than some number R, then t needs to be bigger than roughly log(R) / eta, since log(1 + eta) is about eta for small eta. This is just something that was burned into my head.
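Both regimes, and the frozen noise coordinates, are easy to watch in a quick scalar simulation. This is a sketch of my own; eta = 0.1 and alpha = 1e-3 are arbitrary choices.

```python
# Signal coordinate (beta*_i = 1): beta <- beta - eta * (beta^2 - 1) * beta.
eta, alpha = 0.1, 1e-3
beta, t_half = alpha, None
for t in range(2000):
    if t_half is None and beta >= 0.5:
        t_half = t          # end of the exponential-growth regime
    beta -= eta * (beta ** 2 - 1) * beta
print(t_half)               # on the order of log(1/alpha) / eta steps
print(beta)                 # then converges to 1 exponentially fast

# Noise coordinate (beta*_i = 0): beta <- beta - eta * beta^3
# stays below its tiny initialization forever.
noise = alpha
for _ in range(2000):
    noise -= eta * noise ** 3
print(noise <= alpha)       # True
```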
But you can derive it yourself as well. OK. Cool. So that's what happens with the coordinates that you want to converge to 1. And you also have case two, and you can do the same thing-- I don't want to bore you with all the derivations, and the derivation here is easy, because you are just trying to say that beta_i is decreasing in this regime. Interestingly, if you literally look at this update, it's actually saying that beta_i eventually decreases to 0, but we only care about something weaker. (Let me make sure I'm not missing a sign here-- one moment-- OK, I think I'm not.) I have a small claim here that is particularly useful for the empirical case, which I'm not sure I can get into, so I'm going to skip that part. For now, it's easy to see that beta_i is decreasing: if you start at alpha, you stay smaller than alpha, and that's trivial to see. Maybe let's just leave it there-- this is enough for the population case. So basically, our conclusion is: you converge to something close to 1 in a number of iterations that is logarithmic times 1/eta. You also have the property that all the entries are always less than 1, and the small entries never grow. So your beta_t at any time basically looks like this: there are a bunch of entries, the ones in S, which are growing, and in the complement of S, all of the entries stay less than alpha forever. So you can see that beta_t is still always approximately r-sparse.
Because you only have r large non-zero entries, and all the other entries are very small, beta_t is always approximately in X_r, just because the small entries keep being small. So now, let's talk about the empirical case a little bit. The full analysis probably wouldn't fit within 15 minutes, but I can give you some idea about it. And I'm only going to do the case r = 1, because when r is more than 1, it gets a little bit complicated. So the theorem: for some delta less than a constant, and with r = 1, you basically only need a logarithmic number of examples. Then GD on L_hat-- this is a simplification of the theorem we already stated, and actually also a weakening; maybe I should say it's a weakened theorem, weaker and simplified-- gives, in this many iteration steps, an error that is less than O of some square-root quantity. Why is this weaker than what we said before? Before, the error could go to 0 as long as you take alpha to be small enough. This is weaker because the error doesn't go to 0: before, we could make the error go to 0 as alpha goes to 0, and now you only prove a bound that depends on something like the number of examples. This is just a technicality-- if you want to prove the version where the error goes to 0, you have to do extra work, which is probably too much for this course. So how do we do this? The proof idea is, in some sense, pretty intuitive, given the figure we drew: you are just trying to show that L_hat(beta) is close to L(beta), and that's something you can prove very easily.
So, step one: you try to prove that L_hat(beta) is close to L(beta) for every beta that is-- I guess, technically, I have to say-- approximately sparse, because you can never be exactly sparse, as we discussed. That part is relatively easy. And step two: you want to say that the empirical trajectory beta_t never leaves X_r significantly. How do you show step two? Basically, you are trying to say that you stay close to the population trajectory-- you want the error to not blow up. So what does that mean? Maybe let's draw something here. You are trying to show that two trajectories stay close to each other forever. You have the purple trajectory, which is gradient descent on the population loss. And what happens is, after you take the first step you have some error, and now these two trajectories are not doing the same thing anymore: initially, you were taking the gradient at the same point, but now the purple one is taking its gradient at one point and the black one at another. So you have error not only because the gradients themselves are different-- empirical versus population, that's one difference-- but also because you are evaluating the empirical and population gradients at different places. And that could introduce a bigger error, and then an even bigger error. If you don't do this carefully, it's possible that eventually one trajectory goes one way and the other goes the other way, just because the error keeps blowing up, getting bigger and bigger. So basically, you have to control how the error changes-- that's the key part. And this boils down to a lot of, at least on the surface, seemingly very boring calculations.
If you really want to do all these calculations well, you have to understand a little bit about what each term means, and it does require some extra work. But the first-level thing is that this whole argument is a simplification of one of the papers I wrote a few years back. And when we did that work, the first thing we tried was just to do the calculation: you try to understand which term is problematic, which term may cause a bigger blow-up, then you focus more on that term, try to understand it a little better, and maybe devise some inequalities. But below this level, it becomes quite technical. So I'm going to spend another five minutes on one more thing. One thing we realized is useful-- and this is actually semi-conceptual-- is that to control this error, it helps to represent your iterate in a convenient way. So what does that mean? We already assume r = 1, so let's assume beta* is just e_1, the vector (1, 0, 0, ..., 0). You want to say that the iterate converges to this vector. And one of the useful things we did is to write beta_t as r_t * e_1 plus an error vector zeta_t. So explicitly, you write it as a multiple of beta* plus some error: beta* is along this direction, you are starting from near 0, and you ask how far you are from this line-- that's zeta_t, and the component along the line is r_t * e_1. That's how you represent where you are at time t. And the plan is to say that r_t goes to 1, because eventually you want to reach e_1, and that the error term zeta_t is always small-- I think we prove it's smaller than O(alpha) for any t. That's the high-level plan.
So then what you basically have to do is derive a recursion for r_t and for zeta_t. And when we derive the recursions for these two things, you can always keep in mind what happens in the population case: for the recursion for r_t, you can compare with the same recursion in the population case. So, for example-- let me see which one I can talk about easily; how do I quickly simplify these notes? I think I had a backup plan. Yes, here. For example, if you look at the recursion for r_t, it looks like r_{t+1} = r_t - eta * (r_t^2 - 1) * r_t, minus some term that depends on zeta_t. I have all of these formulas written in the notes, but I don't want to show all the details. And if you look at this, it's very similar to what we had before. (In my notes these are superscripts; just know the superscript there is the same as the subscript here.) The first part is the same as the update for beta we had-- where is the update for beta? Here, right? This is the case where you are looking at a coordinate where beta* has an entry 1. And if you just replace beta_i with r_t, you get the same formula. So r_t has the same recursion: basically, this part is what the population gradient does, and we already analyzed that part. So the only thing you have to deal with is how the error term affects you, and you inductively show the error is small. Under the assumption that the error is small, you can show that the update for r_t is basically doing the same thing as the update for beta_t before. That's how you deal with r_t. But how do we know that zeta_t is small?
That becomes even more complicated, because zeta_t also has a recursion, and I don't even see a simple way to write it. It's something like zeta_{t+1} = zeta_t - (some vector rho_t) odot zeta_t-- something like this; I'm not going to define rho_t. And what you do is note that this is somewhat similar to the recursion for beta_{i,t} in case two, the coordinates i not in S. What was that recursion? It was beta_{i,t+1} = beta_{i,t} - eta * beta_{i,t}^3, and if you really look at the derivation, that cubed term is something like (beta_{i,t}^2 - 0) * beta_{i,t}. If you really match the terms, these pieces match, and something here also matches if you look at the details-- to some extent, not exactly; there's no way to match everything exactly. But you just use the beta update as a reference: you know that some things already match, and what doesn't match is this rho_t, which I didn't define, versus this beta_t squared, and you do some kind of concentration to show that they are similar. Exactly what concentration you show also depends on the exact terms. And somehow, sometimes you relate the zeta_t recursion to the beta_t recursion under the hood, so that you can show zeta_t doesn't grow, eventually-- because you knew beta_t doesn't grow; that's what we proved easily. And once you can relate zeta_t to beta_t, then you can also show zeta_t doesn't grow. I think that's pretty much the best I can do in a short amount of time, and the details are in the notes. Any questions? The [INAUDIBLE]? Sorry. Yeah. Yes-- it should be r_{t+1}; when I changed the superscript to a subscript, I forgot. Yeah. Thanks.
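Putting the pieces together, here is a toy end-to-end simulation of the empirical case (my own sketch, not the lecture's proof; the Gaussian data, dimensions, step size, iteration count, and initialization scale are all arbitrary choices). With r = 1 and n < d, gradient descent on L_hat from a small initialization drives the first coordinate (the r_t part) toward 1, while the off-support coordinates (the zeta_t part) stay at the alpha scale.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 300, 1000
beta_star = np.zeros(d)
beta_star[0] = 1.0                      # beta* = e_1 (the r = 1 case)
X = rng.standard_normal((n, d))
y = X @ (beta_star * beta_star)         # labels from theta* = beta* ⊙ beta*

def emp_grad(beta):
    # gradient of L_hat(beta) = (1/(4n)) * sum_i <beta⊙beta - beta*⊙beta*, x_i>^2
    resid = X @ (beta * beta) - y
    return (X.T @ resid / n) * beta

eta, alpha = 0.05, 1e-4
beta = alpha * np.ones(d)               # small init: infinity norm equals alpha
for _ in range(500):
    beta -= eta * emp_grad(beta)

r_t = beta[0]                           # component along e_1
zeta_inf = np.abs(beta[1:]).max()       # size of the error vector
print(round(r_t, 4))                    # close to 1
print(zeta_inf < 100 * alpha)           # True: off-support entries stay ~alpha
```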
[INAUDIBLE] So the question is that-- [INAUDIBLE] So I guess the question is: in this lecture and the last lecture, we saw two examples where gradient descent converges to the solution that is closest to the initialization-- so why, empirically, do you still have to use the explicit regularization of weight decay? So I would like to argue that empirically, weight decay is actually not very strong. It's not even clear whether weight decay is really doing regularization, because with the same weight decay, you can still memorize the training data-- you can even memorize training data with random labels. Suppose you permute your labels arbitrarily, so that there is no pattern; you can still train your network with the same weight decay and find a zero-error solution. So it seems like weight decay is not really doing that much regularization-- at least not as strong as the theoretical setting would suggest. By contrast, if weight decay really acted as a strong regularizer-- if you really insisted on a small-norm solution, like the minimum-norm solutions in this case or the previous case-- then you could not fit random labels anymore. And another kind of tricky thing is that weight decay in practice also has other effects; it regulates, for example, how batch normalization is working. If you have batch normalization, then the model becomes scale-invariant: if you multiply all the weights by 2, technically you don't change anything. But somehow you still want to regularize that in some way, because in certain cases it changes the optimization. So basically-- this is a good question-- I don't have a very concrete answer.
But I think what we believe is that weight decay is not actually doing strong work in terms of the standard regularization of a norm. And we also somewhat suspect weight decay has some other effects, to some extent. Also, sometimes weight decay is not even important: if you remove it, you still get pretty good results in certain cases. So I guess that's the best we know for now, yeah. Any other questions? OK, sounds good. I guess I will see you on Wednesday.
Stanford CS229M: Machine Learning Theory, Fall 2021 — Lecture 4: Advanced concentration inequalities

So in the last three lectures, we have talked about the basics of uniform convergence. I guess just a very quick review. We have proved that the excess risk-- this is from lecture 2-- is bounded by this, the difference between empirical and population. Can you share your screen to the Zoom? Oh, right. Thanks. Sorry, I forgot-- thanks for reminding me; it's going to be a problem if I forget to do that. I'll do that. I hadn't joined the Zoom meeting here. Cool, I guess now it's working. Thanks for reminding me. So we have shown this-- this is one of the claims we saw in lecture 2. Basically, this is saying that you only have to bound the difference between the population and the empirical loss for all theta, right? The most important thing is the second term, because the first term, we have shown, is bounded by something like 1 over square root of n. So the goal is to bound the second term. And we have discussed how to do that for a finite hypothesis class, and also how to do it for an infinite hypothesis class with a relatively brute-force discretization technique. So in the next few lectures, as I mentioned before, we're going to develop some other techniques to deal with the second term, so that we can get more informative bounds. And today, we are going to take a small digression-- or, in some sense, do a small preparation-- for some of the tools that we're going to use in the next lecture. In the next lecture, what we're going to do is bound the expectation of this quantity-- the expectation over the randomness of the data. This quantity itself is a random variable, right, because it depends on the training data you have, because L_hat depends on the training data.
And next time, we're going to upper bound this by a quantity which is called Rademacher complexity. So today, we're going to do some useful preparation for that. So here is the plan. Next lecture, we're going to do this, and next lecture we're also going to deal with the difference between this quantity and its expectation. That's the plan for the next lecture. And today, what we're going to do is develop some tools that prepare us for proving quantities like this, so that next time we don't have to have a small section in the middle dealing with the tool-- I'm trying to prepare us with the right tools for the next lecture. So a more concrete overview is the following. The goal for this lecture is this: suppose you have random variables x_1 up to x_n, independent random variables. We're going to show two types of inequalities. The first type shows that if you take the sum of these random variables, the sum is concentrated around its expectation. Basically, Hoeffding's inequality is one inequality of this type; we're going to extend Hoeffding's inequality to something more general. And the second type shows that, for certain functions f-- if you look at a general function, not just the sum of the random variables, then of course you have to have some restrictions on what the function f looks like, but with the right restrictions-- you can show that f(x_1, ..., x_n) is still concentrated around its expectation. And this second type will be particularly useful for showing one inequality in particular-- let's call it inequality (1). For type one: this corresponds to L_hat(theta) being close to L(theta), because L_hat(theta) is of the form x_1 + x_2 + ... + x_n, and L(theta) is the expectation of L_hat.
And the second type of inequality will be useful for proving inequality 1, because if you care about something like this quantity (this is L hat), you can view the entire thing as a function of your IID training data, a function of x1 up to xn, where these are the training data. These kinds of inequalities are called concentration inequalities. The key idea is that if you have a family of IID random variables, then, first of all, if you take the sum of them, it becomes Gaussian-like and concentrates around its mean; and the same thing happens if you apply certain kinds of functions to x1 up to xn. I will tell you what kinds of functions have this property. And this kind of inequality is not only useful for what we are going to do next, but is generally pretty useful for machine learning and statistical learning theory. Because, in some sense, if you think about what happens in learning theory, in many cases you are dealing with the difference between an empirical distribution and a population distribution, and these things show up in many, many different cases. That's also one of the reasons why I isolate this part as a single lecture on the technique. If it were just a tool useful for one lecture, we could invoke it as a lemma, but here it's more useful than that. So I want to show you how to prove some of these things. I'm not going to prove all the inequalities I show today, but I'm going to mention some of the more advanced versions, so that you know they exist, and when you need to use them you can find the right tools. So that's the overview for the lecture.
So let's start with the simple version, where we have a sum of independent random variables. We have discussed this before in the context of Hoeffding's inequality; now I'm going to have a more comprehensive discussion. So consider a random variable Z which equals the sum of x1 up to xn, where the xi's are independent. As a warm-up: what if you ignore the structure of Z? Obviously you know that Z is a sum of independent random variables, but what if you ignore that structure? You can still show an inequality saying that Z is close to its expectation. Here is the inequality, Chebyshev's inequality, which you've probably seen in a probability class. It says: the probability that Z deviates from E[Z] by more than t is at most Var(Z) / t^2. It's pretty intuitive: if the variance of Z is small, then you have less deviation from the expectation; and if t is bigger, if you look at a bigger window, then there's a smaller probability of falling outside the window. So if you draw this, suppose you have a distribution that looks like a bump with its mean at E[Z], and you look at the standard deviation of Z. Suppose you take t to be the standard deviation of Z times 1 over square root of delta and plug into the inequality; what you get is that the probability that Z deviates by more than std(Z) / sqrt(delta) is at most delta.
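As a quick numerical sanity check of Chebyshev's bound (a sketch I'm adding, not from the lecture; the uniform-sum setup and names like `chebyshev_check` are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def chebyshev_check(samples, t):
    """Compare the empirical tail P(|Z - EZ| >= t) with Var(Z) / t**2."""
    mean, var = samples.mean(), samples.var()
    empirical_tail = np.mean(np.abs(samples - mean) >= t)
    chebyshev_bound = var / t**2
    return empirical_tail, chebyshev_bound

# Z = sum of 30 independent Uniform[-1, 1] variables.
z = rng.uniform(-1.0, 1.0, size=(100_000, 30)).sum(axis=1)

# Take t = 2 standard deviations; Chebyshev then gives the bound 1/4,
# while the actual tail of this Gaussian-like sum is far smaller.
tail, bound = chebyshev_check(z, t=2 * z.std())
```

The gap between `tail` and `bound` previews the lecture's point: Chebyshev is valid for any distribution, but very loose for Gaussian-like sums.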
So this is saying that if you go out from the mean by std(Z) times 1 over square root of delta, the probability in the tail beyond that point is at most delta. This is, in some sense, the weakest form of concentration, which you always have without using any structure of the random variable Z. However, it's not very strong, as we will see. Because think about what happens for a Gaussian. Suppose Z is Gaussian, say with mean mu and standard deviation sigma; it doesn't matter what the mean is. Then what you know is that, with probability at least 1 minus delta, |Z - E[Z]| is at most the standard deviation of Z times square root of log(1/delta), up to constants. So for the same tail probability delta, a Gaussian gives you a stronger bound: the factor is square root of log(1/delta) instead of square root of 1/delta. I haven't proved this for you; it follows from a calculation with the Gaussian tail. In some sense, this is saying that the tail decays faster for a Gaussian. For a Gaussian, you only have to multiply the standard deviation by square root of log(1/delta), and then you know the rest of the probability mass is less than delta. But if you don't know it's Gaussian, then you have to be more generous, a factor of 1 over square root of delta, in terms of the interval that you draw, OK?
So, in some sense, the goal is to show that if your Z is a sum of random variables, then it is more like a Gaussian than like the worst case, so that you get the better bound rather than the bound from Chebyshev's inequality. And look more carefully at the consequences of these two inequalities; let's call the Chebyshev-style consequence number 3 and the Gaussian-style consequence number 4. If you have number 4 and you take delta to be inverse poly(n), then with high probability, at least 1 minus 1/poly(n), |Z - E[Z]| is at most std(Z) times square root of log n. So you basically only lose a log factor if you want the probability to be very high: multiply the standard deviation by square root of log n, and the rest of the probability becomes very small. However, if you only have number 3, the Chebyshev consequence, and you take delta to be inverse poly(n), then what you get with high probability is |Z - E[Z]| at most std(Z) / sqrt(delta), which is std(Z) times poly(n). So there's a big difference between the two additional factors: square root of log n versus poly(n). That's why we want the faster, smaller tail as in number 4 instead of number 3. And there's a slightly alternative view, and we're going to switch between these two equivalent views very often. The alternative view is in terms of the tail probability that Z minus E[Z] exceeds t.
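The two interval widths, sigma / sqrt(delta) from Chebyshev versus sigma * sqrt(2 log(2/delta)) from a Gaussian-type tail, can be compared directly as delta shrinks; a small sketch (the helper names are mine, and the Gaussian width comes from inverting the two-sided tail bound 2 exp(-t^2 / (2 sigma^2)) = delta):

```python
import math

# With probability >= 1 - delta:
#   Chebyshev:      |Z - EZ| <= sigma / sqrt(delta)
#   Gaussian tail:  |Z - EZ| <= sigma * sqrt(2 * log(2 / delta))
def chebyshev_width(sigma, delta):
    return sigma / math.sqrt(delta)

def gaussian_width(sigma, delta):
    return sigma * math.sqrt(2 * math.log(2 / delta))

# At delta = 1e-6 with sigma = 1, Chebyshev needs an interval of width 1000,
# while the Gaussian tail needs only about 5.4.
widths = {d: (chebyshev_width(1.0, d), gaussian_width(1.0, d))
          for d in (1e-2, 1e-4, 1e-6)}
```

This is exactly the poly(n)-versus-sqrt(log n) gap from the lecture, written in terms of delta.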
So for a Gaussian, this looks like: the probability that |Z - E[Z]| is larger than t is at most 2 exp(-t^2 / (2 Var(Z))). Let's call this inequality 5, just temporarily, and compare it with Chebyshev's inequality. For Chebyshev, the right-hand side decays with t polynomially; it's 1 over t^2. For 5, it decays exponentially fast as t goes to infinity. So that's another way to see the difference: the tail probability of a Gaussian distribution decays exponentially fast, while Chebyshev's inequality only gives you a polynomially fast decaying bound. So we're going to look for the faster tail, right? That's our goal; to repeat: we want to show Z behaves like a Gaussian. But of course, in what sense is it like a Gaussian? There are multiple versions, and we're going to formalize what a Gaussian-like tail means. To do this formally, let's start with some definitions. In fact, we're going to define what "Gaussian-like" means to start with. So let's say a one-dimensional random variable X with finite mean mu = E[X] is called sub-Gaussian with parameter sigma if the following is true: E[exp(lambda (X - mu))] <= exp(sigma^2 lambda^2 / 2) for every lambda. It's not very intuitive when you first look at it, so I'm not expecting that you can see what this really means. But this is the definition of being close to a Gaussian. The more intuitive statement is the following corollary: X being sigma-sub-Gaussian implies the following tail bound.
So the probability that |X - mu| is larger than t is at most 2 exp(-t^2 / (2 sigma^2)), for every t. The corollary is probably intuitive, right? X is sub-Gaussian if you have this exponentially decaying tail bound. The right-hand side decays very fast in t; as t goes to infinity, it's actually not only exponential in t, it's exponential in t^2. So this is, in some sense, a much more intuitive definition of sub-Gaussian, but the formal definition above is more useful for mathematical cleanness. You can basically think of the two as equivalent; actually, they are somewhat equivalent. Before talking about that, recall that if X were literally Gaussian with variance sigma^2, then this inequality, maybe let's call it 6, is true. I didn't prove this, but it's relatively standard: if you have a Gaussian with variance sigma^2, you can do some calculation, some integral, which is not super trivial, but believe me, 6 is true. So being sigma-sub-Gaussian is saying that you have the same tail property as a Gaussian random variable with variance sigma^2. And because of this, the sigma^2 in the sub-Gaussian definition is often called the variance proxy. In some sense, if X is sigma-sub-Gaussian, you can think of sigma^2 as a kind of pseudo-variance. It's not exactly the variance, but it's an alternative version of the variance, which is actually probably more important than the variance itself. That's the rough intuition. And regarding the two definitions, the corollary 6 and the formal definition, maybe let's call it 7: 6 and 7 are, in some sense, equivalent up to a small constant factor.
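As a sanity check of the corollary for an actual Gaussian (my own illustration, with an assumed sigma = 2, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)

# Empirically check the sub-Gaussian tail bound (inequality 6) on a true
# Gaussian: P(|X - mu| >= t) <= 2 * exp(-t^2 / (2 * sigma^2)).
sigma = 2.0
x = rng.normal(loc=0.0, scale=sigma, size=1_000_000)

def tail_and_bound(t):
    empirical = np.mean(np.abs(x) >= t)
    bound = 2 * np.exp(-t**2 / (2 * sigma**2))
    return empirical, bound

results = {t: tail_and_bound(t) for t in (1.0, 2.0, 4.0, 6.0)}
```

The bound holds at every t, and the exponential-in-t^2 shape means both columns shrink at a comparable rate, unlike the 1/t^2 Chebyshev bound.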
What does that mean? If X satisfies 6, then X is O(sigma)-sub-Gaussian under the formal definition 7. So, if you don't care about a constant factor in front of the variance proxy, then 7 implies 6 and 6 implies 7, up to a small constant loss. The way that I always think about this is that I picture 6 intuitively, but when I really need to use properties of sub-Gaussianity, when I want to prove something, I typically use 7. Also, I haven't told you why these two statements are related; it still sounds mysterious. So here is the reason. What I'm going to do is show that 7 implies 6. Showing that 6 implies 7 would require a different proof, which I won't do, but if I show 7 implies 6, you'll probably get some intuition for why they are related quantities. The general intuition is the following. Look at Chebyshev's inequality: how do you prove it? You say that the probability that |Z - E[Z]| is larger than t equals the probability that (Z - E[Z])^2 is larger than t^2, and then you use the so-called Markov's inequality: this is at most the expectation of that squared random variable over t^2. The last step is Markov's inequality. Is it called Markov? Yes, I think it is.
Markov's inequality says that for a nonnegative random variable, call it Y, the probability that Y is larger than t is at most E[Y] / t. Because if you have a lot of mass above t, then your expectation has to be high; that's basically the intuition. And you can see that the way to prove Chebyshev's inequality is to raise to the second power. So naturally, you can also consider higher powers and apply Markov's inequality again to get other types of inequalities. If you consider higher moments, you can get something like this: for example, look at the fourth power. The probability that |Z - E[Z]| is larger than t is still equal to the probability that (Z - E[Z])^4 is larger than t^4, because raising everything to the fourth power gives the same event. And then you use Markov's inequality to get E[(Z - E[Z])^4] / t^4. So now you see that you have a better dependency on t, a faster decay in t, which is something we are looking for. We are ultimately aiming for exponential decay in t, but already we have something better than t^2: we get t^4. But of course, the trade-off is that the quantity on top, the fourth moment of the deviation, might be bigger, in some sense, than the variance. So you get an implicit trade-off: a better dependency on t, but a worse numerator. And you can keep doing this with higher powers, raising to the sixth power, the eighth power, and so forth. Actually, especially in the early works on concentration inequalities, people do raise to higher powers.
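The moment trade-off can be seen numerically for a standard normal (an illustration of mine, not the lecture's; the fourth moment of a standard normal is about 3, versus a variance of 1):

```python
import numpy as np

rng = np.random.default_rng(2)

# Markov-type bounds from different moments:
#   P(|Z - EZ| >= t) <= E[|Z - EZ|^k] / t^k.
# Higher k gives faster decay in t, at the price of a larger numerator.
z = rng.normal(size=1_000_000)
centered = z - z.mean()

def moment_bound(t, k):
    return np.mean(np.abs(centered) ** k) / t**k

# At t = 4 the 4th-moment bound (about 3/256) beats Chebyshev's k = 2
# (about 1/16) ...
better_at_large_t = moment_bound(4.0, 4) < moment_bound(4.0, 2)
# ... but at t = 1 the ordering flips (about 3 versus about 1): the
# numerator/decay trade-off in action.
better_at_small_t = moment_bound(1.0, 2) < moment_bound(1.0, 4)
```

No single power wins everywhere, which is exactly why the moment generating function, introduced next, is convenient: it handles all powers at once.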
It turns out that there is a relatively simple way to deal with all the powers at once, called the moment generating function. This makes things cleaner: you don't have to consider each power separately and see which one gives the best trade-off. The moment generating function is exactly the quantity we used in the definition of sub-Gaussianity: the expectation of the exponential of lambda times the deviation of X from its expectation. Why is this an interesting quantity? Because if you Taylor expand what's inside, the exponential, you get 1 plus lambda (X - E[X]) plus (lambda^2 / 2)(X - E[X])^2, and so forth. Written more formally, it's the sum over k from 0 to infinity, with coefficient lambda^k over k factorial, of (switching the expectation with the sum) the expectation of (X - E[X])^k. So you can see that the moment generating function is really a mixture of the different moments: you have all the moments, and every moment has a different weight in front of it. In some sense, what we are going to do is change lambda, which changes the relative weights on all the moments, so that you can choose the right trade-off, focus on the right moment, and get the right dependency. That's the rough intuition. And if you really do this mathematically, it's actually even simpler than that. Look at the probability that X - E[X] is larger than t. Formally, the way you do the trade-off is the following.
You look at this and you say: instead of raising to a power, I'm going to exponentiate. The event X - E[X] >= t is equivalent to the event exp(lambda (X - E[X])) >= exp(lambda t), for any lambda > 0. And now you use Markov's inequality on this exponentiated version: the probability is at most E[exp(lambda (X - E[X]))] / exp(lambda t). And now you use the definition of sub-Gaussianity. Let me review the definition, or maybe you remember it: the moment generating function is bounded by the exponential of a quadratic function of lambda; that's the important thing, there's a lambda^2 in the exponent. So once you apply that, you get exp(sigma^2 lambda^2 / 2) in the numerator, divided by exp(lambda t), which is exp(sigma^2 lambda^2 / 2 - lambda t). And now you can see that in the exponent you have a quadratic in lambda. And lambda is a free parameter; you can choose whatever you want. So you want to choose the lambda that minimizes this quadratic, to get the best bound. Finding that lambda is relatively easy: the minimizer is the global minimum of the quadratic, so you take the derivative and set it to zero. The best lambda turns out to be t / sigma^2, and when you plug that in, the bound becomes exp(-t^2 / (2 sigma^2)). So basically, we've shown the tail bound, equation 6.
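The optimization over lambda can be checked in a few lines (a sketch; the particular t, sigma, and the grid check are mine):

```python
import math

# The Chernoff step: for a sigma-sub-Gaussian X, Markov's inequality gives
#   P(X - EX >= t) <= exp(sigma^2 * lam^2 / 2 - lam * t)   for every lam > 0,
# and we minimize the exponent over the free parameter lam.
def exponent(lam, t, sigma):
    return sigma**2 * lam**2 / 2 - lam * t

t, sigma = 3.0, 1.5
lam_star = t / sigma**2  # from setting the derivative to zero

# lam_star beats every lambda on a grid, and the optimized bound matches
# the closed form exp(-t^2 / (2 * sigma^2)).
grid = [0.05 * k for k in range(1, 200)]
is_minimizer = all(
    exponent(lam_star, t, sigma) <= exponent(l, t, sigma) + 1e-12 for l in grid
)
best_bound = math.exp(exponent(lam_star, t, sigma))
closed_form = math.exp(-t**2 / (2 * sigma**2))
```

Plugging lam_star = t / sigma^2 into the exponent gives t^2/(2 sigma^2) - t^2/sigma^2 = -t^2/(2 sigma^2), which is where the Gaussian-looking tail comes from.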
That is, starting from the formal definition 7, you use the definition of sub-Gaussianity and get the tail bound, the corollary 6, for this random variable. And you can also get the other side. So far we only bounded the probability that X is much bigger than E[X] plus t; you can also bound the probability of being less than E[X] minus t. How do you do that? The trick is that you just flip: define X' to be -X. Then the probability that X' - E[X'] is larger than t is the same as the probability that X - E[X] is smaller than -t, just by this simple definition, and then you apply what we already proved to X', which implies the other side of the bound for X. But this is not super important; the two sides are basically the same for our purposes. OK, so what has happened so far? I have defined sub-Gaussian random variables and argued that sub-Gaussianity can be seen in two ways. One way: a sub-Gaussian random variable has a very fast decaying tail. The other way: a certain kind of moment (you can think of E[exp(lambda (X - mu))] as a kind of generalized moment) is bounded, with all of them bounded by something of this exponential-quadratic form. So far, I've only talked about one random variable. But the reason I care about this is the following theorem, which is the main point, in some sense: if all the xi's, all the independent random variables, are sub-Gaussian, then the sum of them is also sub-Gaussian. So you can compose, and that's the biggest benefit of sub-Gaussianity. So: let x1 up to xn be independent sub-Gaussian random variables with variance proxies sigma_1^2 up to sigma_n^2, respectively.
Then the sum of them is also sub-Gaussian, with variance proxy the sum of the sigma_i^2 from 1 to n. As a corollary, because the sum is sub-Gaussian with this variance proxy, you get concentration for Z of the exponential form: a tail that decays exponentially fast. This is very useful and very important, because now, if you have a sum of independent random variables and want to know how fast the tail decays, you can just check whether each of them is sub-Gaussian. I'm going to prove this in a moment; the proof is actually just two lines, which is very cool. But before proving the statement, let me give you some examples of which random variables are sub-Gaussian. The applicability of the theorem depends on whether you can show that each xi is sub-Gaussian: if you can show each xi is sub-Gaussian with good parameters sigma_i, then the theorem applies and you get a pretty good bound for the sum. So which individual random variables are sub-Gaussian? Here are some examples. By the way, whether your random variable is sub-Gaussian sometimes depends on what sigma you choose: if you choose bigger and bigger sigma, there is at least more chance that it can be sigma-sub-Gaussian. Of course, it's not guaranteed that choosing a really, really big sigma makes it sub-Gaussian; that's not always true. But at least intuitively, it's not a binary question, "this one is sub-Gaussian, this one is not"; sometimes it depends on the parameter you choose. First example: a Rademacher random variable, also called a random sign, which just means X is uniform on plus or minus 1.
So this one, I claim, is sub-Gaussian. Intuitively, the reason is that if you look at the density of this random variable, it's a spike at 1 and a spike at minus 1; the density decays extremely fast once you go outside plus or minus 1, it just becomes 0. That's why it's sub-Gaussian. And technically, you can prove that the probability that |X| is larger than t is at most 2 exp(-t^2 / c0) for some constant c0 = O(1), say 2. This is because if t is at most 1, the right-hand side is at least 2 exp(-1 / c0), which is bigger than 1 if you take c0 to be a big enough constant like 2, so the bound holds trivially. And if t is bigger than 1, then the left-hand side is just zero, so it's also true. So a Rademacher random variable is O(1)-sub-Gaussian: sub-Gaussian with variance proxy O(1). Similarly, you can show that if |X - E[X]| is bounded by M, so suppose you have a random variable whose density lives in a window of size M around E[X], whatever the density inside is, and is literally 0 outside, then once you go beyond the window M, the density decays extremely fast, it just becomes zero. That's why this is O(M)-sub-Gaussian. To prove it formally you still need to verify the definition, of course, but I guess it's intuitive that it's sub-Gaussian, just because the tail vanishes completely outside the window M. And there is a stronger claim, which also gets the right constant; here I only have O(M).
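Before moving on to the sharper constant, here is an empirical check that puts the additivity theorem and the Rademacher example together (my own setup): a sum of n random signs should obey the tail 2 exp(-t^2 / (2n)), since each sign has variance proxy 1.

```python
import numpy as np

rng = np.random.default_rng(4)

# Each Rademacher variable is 1-sub-Gaussian (E[exp(lam * X)] = cosh(lam)
# <= exp(lam^2 / 2)), so by the additivity theorem the sum of n of them is
# sub-Gaussian with variance proxy n, giving the tail bound
#   P(|Z| >= t) <= 2 * exp(-t^2 / (2 * n)).
n, trials = 100, 100_000
z = rng.choice([-1, 1], size=(trials, n)).sum(axis=1)

def tail_vs_bound(t):
    return np.mean(np.abs(z) >= t), 2 * np.exp(-t**2 / (2 * n))

checks = [tail_vs_bound(t) for t in (10.0, 20.0, 30.0)]
```

This is exactly Hoeffding's inequality for random signs, recovered through the sub-Gaussian machinery.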
But you can actually get a stronger claim, which gets the exact constant. It says: if a <= X <= b almost surely, so your random variable is almost surely bounded between a and b, then you can bound the moment generating function: E[exp(lambda (X - E[X]))] is at most exp(lambda^2 (b - a)^2 / 8). You want a quadratic in lambda in the exponent, and you care about the constant, because the constant gives the variance proxy: this is saying that X is sub-Gaussian with variance proxy (b - a)^2 / 4. This is actually a homework question. It's not that trivial to prove if you want to get the right constant; if you just want some constant, say a 2 instead of the 8, it's relatively easy, but to get the 8 you need to work slightly harder. We'll have some hints in the homework to help you prove it. All right, so that's about bounded random variables: basically, if you have a bounded random variable, it's going to be sub-Gaussian. And this works for Gaussian random variables too, of course; a Gaussian random variable had better be sub-Gaussian, right? As we motivated it: if X is Gaussian with mean mu and variance sigma^2, then you can compute E[exp(lambda (X - E[X]))] exactly, and it equals exp(sigma^2 lambda^2 / 2). That's why it's sub-Gaussian with variance proxy sigma^2. I think bounded random variables and Gaussian random variables are probably the most important examples of sub-Gaussian random variables. And just a small note: in the homework, we're going to talk about something called sub-exponential random variables, which is a weaker version of sub-Gaussian random variables.
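The bounded-variable claim above (Hoeffding's lemma) can be spot-checked numerically; here for a Bernoulli(p) variable, where b - a = 1, so the bound is exp(lambda^2 / 8) (an illustration of mine, not a proof; the homework asks for the real proof):

```python
import math

def centered_mgf_bernoulli(lam, p):
    """E[exp(lam * (X - p))] for X ~ Bernoulli(p), computed exactly."""
    return (1 - p) * math.exp(-lam * p) + p * math.exp(lam * (1 - p))

# Hoeffding's lemma: for a <= X <= b almost surely,
#   E[exp(lam * (X - EX))] <= exp(lam^2 * (b - a)^2 / 8).
# Here X is in [0, 1], so the right-hand side is exp(lam^2 / 8).
holds = all(
    centered_mgf_bernoulli(lam, p) <= math.exp(lam**2 / 8) + 1e-12
    for p in (0.1, 0.3, 0.5, 0.9)
    for lam in (-3.0, -1.0, 0.5, 2.0)
)
```

The p = 0.5 case (where the MGF is cosh(lambda / 2) after centering) is the one that makes the constant 8 tight, which is why getting the 8 rather than a 2 takes extra work.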
And this is precisely to deal with the fact that some random variables are not sub-Gaussian, whatever variance proxy you choose. To give you a rough sense of what the homework is about: in the corollary view of sub-Gaussianity, the alternative view, you have t^2 in the exponent; you insist that the decay is exponential in t^2, and that's a relatively strong requirement. There are random variables that don't have this fast decay. For example, one typical example is if you square a Gaussian, which becomes (I was blanking on the name) a chi-squared distribution. That one doesn't have this fast decay of the tail: it's exponential in t, not t^2. For these random variables, you still want to prove something about concentration, and you can still do it almost the same way as for sub-Gaussian random variables, with some minor technical differences. That's what one of the questions in homework 1 is about. So, all right, cool; any questions so far? OK, so now let's prove the theorem about the additivity of sub-Gaussian random variables. Proof of the theorem. Our goal is to show that the sum of the xi's is sub-Gaussian. So we just use the definition; we start with the definition. If you want to prove it is sub-Gaussian, you need to look at the moment generating function. I have some... some type of [INAUDIBLE]. OK. So you look at the moment generating function of the sum. And here you can see the nice thing about the exponential, which is that it decomposes very easily: you can write it as the product of exponentials, exp(lambda (x1 - E[x1])) times the rest. And because the xi's are independent, you can switch the expectation; you can factorize.
So you can switch the expectation with the product to get E[exp(lambda (x1 - E[x1]))] times E[exp(lambda (x2 - E[x2]))], and so on. OK, so this is using independence. And then you just say: I know each of these random variables is sub-Gaussian, so I bound each factor using the definition that xi is sigma_i^2-sub-Gaussian. That gives exp(lambda^2 sigma_1^2 / 2) times exp(lambda^2 sigma_2^2 / 2), and so on; this is by definition. And then you get exp((lambda^2 / 2) times the sum of the sigma_i^2). That means the sum of the xi's is sub-Gaussian with variance proxy the sum of the sigma_i^2; that's the variance proxy for the sum. And you can see the benefit of using the moment generating function, the exponential: it factorizes easily. If you didn't use the exponential, if you used the 4th power or the 8th power, you wouldn't have such a nice, simple proof. Are there questions? OK, so that's the first part of the lecture, about sums of independent random variables. And now, I'm going to talk about more complex functions of independent random variables: how do those concentrate? And you can see that, in some sense, you want to say that when this function F is close to a summation in some weak sense, you still get a very similar type of bound. That's the spirit. But what does "close to a summation" mean? We'll see. So here is one of the theorems, something we're actually going to use in a future lecture, called McDiarmid's inequality. There are a bunch of conditions. Suppose you have a function f (I guess little f is the capital F I wrote before) satisfying the so-called bounded difference condition. What does the bounded difference condition mean?
It says: for every i, for every choice of x1 up to xn (by the way, these are little xi's; we haven't got any random variables yet, these are just generic numbers), and for every xi', which will be used as a replacement for xi, you look at two quantities: f applied to x1 up to xn, and f applied to x1 up to xn but with xi replaced by xi'. So basically, you replace one coordinate by something else, and you look at how much change you can make by doing this. And you assume that the maximum change you can make is ci. Basically, this is saying that the function is not very sensitive to changing a single variable, a single coordinate of the input. And if you have this bounded difference condition, then for independent random variables X1 up to Xn (now they are capital X), the probability that f(X1, ..., Xn) deviates from its expectation by more than t is at most exp(-2 t^2 / (sum of the ci^2 from i = 1 to n)). In other words, you are essentially saying that f(X1, ..., Xn) is sub-Gaussian with variance proxy on the order of the sum of the ci^2. There are some constants you may lose here: this is using the equivalence of the two definitions, the more intuitive tail definition of sub-Gaussian, and if you change to the formal definition you lose a constant. Can I ask? You suggested that we view these f's as functions that are like sums. But would you say those conditions capture that? If it looks like a sum, but you could have xi' and xi differing by more than ci... Yeah, so, that's a very good question.
So I think before, I forgot to repeat a question. So from now on, I should try to repeat a question. The question was that I mentioned that you want to make some conditions on f, which make it similar to the sum. So and why this is similar to the sum? So first of all, I think a small clarification, I guess, by similar is actually a very weak sense. You'll see that in some sense, all of these conditions becomes, in some sense, not very similar. But I think they are only similar in the sense that you want to make sure that no coordinate is very strongly influencing your final outcome. So when you have a sum, so if you change one coordinate, you wouldn't influence your final outcome much. And here is the same thing. So basically, I think whether it's a sum or not, it doesn't matter. It's really about whether you have certain kind of Lipschitzness property. So maybe just briefly, also, we can verify that this condition contains the sum, at least. So that probably would be useful. So suppose you have fx1 up to xn is equal to sum of xi. And each of the xi is bounded by something like bi and I don't want to put ai. And now, suppose you change one of the xi, how much you can change the final outcome? So then you can say that you have the bounded difference condition where ci is equals to bi minus ai, because that's the biggest change you can make if you change one coordinate xi. So that's the maximum kind of range of changes for the sub. But you can see that-- you can imagine many other functions that have this property, which doesn't look like sum at all, all right? So indeed, more precisely, I think the kind of the intuition is that you want this function f to be somewhat Lipschitz in some cases. Lipschitz are not super sensitive to individual things. Yeah, that's the general intuition. [INAUDIBLE] Right, so the question was why you just don't-- why don't just assume that f is Lipschitz, right? So this is a very good question. 
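To make the "not a sum" point concrete, here is a small check with an assumed example, not from the lecture: the number of distinct values in a list is far from a sum, yet resampling any single entry changes it by at most 1, so the bounded difference condition holds with c_i = 1.

```python
import numpy as np

def n_distinct(xs):
    # f(x_1, ..., x_n) = number of distinct values -- clearly not a sum
    return len(set(xs))

rng = np.random.default_rng(1)
n = 30
for _ in range(200):
    xs = [int(v) for v in rng.integers(0, 10, size=n)]
    i = int(rng.integers(0, n))
    ys = list(xs)
    ys[i] = int(rng.integers(0, 10))      # resample one coordinate
    # changing a single coordinate changes the count by at most c_i = 1
    assert abs(n_distinct(xs) - n_distinct(ys)) <= 1
print("bounded difference condition holds with c_i = 1")
```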
And the very short answer is that we don't know how to prove that version. We don't know how to prove that if f is Lipschitz, then you have this result. And a longer version is that people have been actually trying to-- this is very-- a lot of researchers, especially mathematicians, have worked on this area. And there's a question about what's the right definition of Lipschitzness. I guess you probably will see in a moment, I'm going to show two more general version. And they have a different definition of Lipschitzness, or the intuition of Lipschitzness. And they are somewhat complicated. It's not as clean as you expect, just mostly because there are some technical challenges in those cases. And you will see also a case where if xi is sub-Gaussian, then you have a very clean theorem. It just literally, as you said, you just assume f is Lipschitz. We'll get to that in a moment. [INAUDIBLE] Right, so I guess your question is that here, you need this absolute bound in some sense. In some sense, to make sure you have this bounded difference condition, right, so you need some things that kind of absolutely-- to be absolutely bounded. For example, in sum case, where you need xi's to be absolutely bounded between ai and bi, right? And this is not very-- this is a little bit different from the intuition we had about sub-Gaussian. Before, we were saying that if each random variable has a fast tail, then the sum also has a fast tail. But here, you need absolute-- some kind of absolute restrictions, right? So this is actually related to the answer I had before. If you look at all the technical details, actually, it's not that easy to deal with a tail that can go to infinite. So there are some technical challenges here, which prevent us to have something super clean, I would say. So for example, if you know xi is sub-Gaussian, we will see that you have a very clean theorem. 
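An editorial aside on why tails that extend to infinity are delicate: even a mild nonlinearity inside f changes the tail rate. For Z ~ N(0,1), the tail of Z decays like exp(-t^2/2), while Z^2 (chi-square with one degree of freedom) only decays like exp(-t/2). The closed forms below use the standard error function, nothing lecture-specific.

```python
import math

def tail_Z(t):
    # P(|Z| > t) for Z ~ N(0,1)
    return math.erfc(t / math.sqrt(2))

def tail_Zsq(t):
    # P(Z^2 > t) = P(|Z| > sqrt(t)): chi-square(1), only an exponential tail
    return math.erfc(math.sqrt(t / 2))

# at the same threshold, the squared variable has a much fatter tail
for t in [4.0, 9.0, 16.0]:
    assert tail_Zsq(t) > tail_Z(t)
# sanity: P(Z^2 > 16) and P(|Z| > 4) describe the same event
assert abs(tail_Zsq(16.0) - tail_Z(4.0)) < 1e-10
print(tail_Z(4.0), tail_Zsq(4.0))
```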
But if you don't know xi is sub-Gaussian, then it's technically very complicated to deal with the tail of each of the xi. And in some sense, you can imagine why, right? Maybe this is a bit too advanced, but for example, suppose xi is just Gaussian, so its tail is sub-Gaussian. And suppose the function f squares xi somewhere inside. Now xi becomes xi squared, and the tail decays more slowly, as I said: when you square it, it becomes a chi-square distribution, whose tail is heavier. And if you take the fourth power, it becomes heavier still. So you have to somehow balance this, right? It's not only about the input; it's also about what f does. If f does something bad -- for example, squares the Gaussian or raises it to a higher power -- then the tail gets heavier, and your concentration becomes worse. So that's the challenge. Yeah, so let me proceed with a more general version. And then I'm going to talk about the Gaussian version. And then at the end, if I have time, I'm going to prove the McDiarmid theorem. That theorem is something we can prove ourselves without a lot of heavy machinery, but the theorem I will introduce next has a very challenging proof. So this is a more general version. I think this is Theorem 3.18 in the reference book by van Handel -- if you look at the lecture notes, there is a formal reference; it's a book on probability theory. What happens in this book is that they extend the bounded difference condition to something milder. So you start with some definitions. This is d_i minus: let's define d_i^- f(x_1, ..., x_n) to be f(x_1, ..., x_n) minus the infimum over z of f(x_1, ..., x_{i-1}, z, x_{i+1}, ..., x_n). So basically, you're saying: if you look at x and change one coordinate, you want to see how much you can make f smaller. And this quantity is always at least zero.
So basically, you are asking how much you can make f smaller by changing the i-th coordinate to some z -- and the inf is just a minimum over that z, right? So the difference between this and before is that before, in McDiarmid, you require d_i^- f(x) to be less than c_i for every x. But here, you don't insist on that. At least you keep x as an argument of this difference, right? So it defines a sensitivity at every point; you didn't assume a global sensitivity -- you talk about the sensitivity at x. That's one quantity. And then you can also define the sensitivity on the other side, d_i^+, which is the same thing with a sup in place of the inf. So now these are two functions that measure the sensitivity at every point, but they are not a global sensitivity. And then you can define a global quantity, the norm of d^+, which is the sup over all x_1, ..., x_n -- but before taking the sup, what's inside is the sum over i of these pointwise sensitivities squared. So let me just write down all the definitions and then interpret them. And then maybe let me write the conclusion. You get that the probability that f(X_1, ..., X_n) minus the expectation of f is larger than t is at most exponential of minus t squared over 4 times the d-minus quantity. So you get a slightly different bound for the upper side and the lower side -- the other side uses 4 times the d-plus quantity -- which is probably not important for many cases, but just for the sake of completeness, let's write both of them. And X_1 up to X_n are independent, of course. So that's the theorem. So I guess the important thing is: what are these d plus and d minus, and how is this different from McDiarmid, right? Basically, the c_i in McDiarmid is obtained by first taking the sup over x_1 up to x_n of d_i^+ f(x_1, ..., x_n). That's c_i, right -- a global sensitivity for the i-th coordinate.
And then the variance proxy in McDiarmid is the sum of the c_i squared: you take the sum over i from 1 to n of the sup over x_1, ..., x_n of d_i^+ f(x_1, ..., x_n), squared. So basically, you look at a global sensitivity for every coordinate, and you take the sum over the coordinates. And here the difference is that for this d plus or d minus, you are first taking the sum of the sensitivities over all coordinates at the single point x -- you first take the sum, and then you take the sup. It's probably not that easy to find a concrete example to see the difference between these two, but I guess you can imagine that the order of doing the sup and the sum does matter. So it's possible, for example, that you have a point x such that you are very sensitive in only one coordinate, and not very sensitive in the other coordinates. Then taking the sum first and the maximum afterwards is more advantageous. And in some sense, I think mathematicians have spent a lot of time thinking about how to change this order. So the best thing you want is to take the sup at the very end, like this one -- though this one actually still has a small sup hidden in the middle, because the definition of d_i^+ f itself involves a sup (or inf) over z. The best thing would be to define the sensitivity pointwise, like a gradient, and take the sup at the very end, which is what I'm going to show for the Gaussian distribution. But this is the best we know for a general distribution, right? You look at the sensitivity of every coordinate, take the sum of all the sensitivities, and then take the sup -- with the sensitivity defined in this particular way. Does it make some sense? Yeah. I'm not expecting you to understand all the nuances. I don't even understand exactly all the nuances; I would need to open a book to find the cases where there is a difference.
I think there are actually, indeed, quite some differences between these two inequalities, but you probably wouldn't easily be able to see them. OK, and now, let's answer the question about what happens if all the xi's are unbounded. So what happens if x1 up to xn are unbounded? If they are unbounded, like Gaussian random variables, then even if you take f to be the sum, you won't satisfy the bounded difference condition. And you won't satisfy the improved condition here either, because even with the inf there, this quantity would be infinite: there is no absolute bound for any individual random variable. So that's the next question: how do we deal with the case where x1 up to xn are not bounded? And there are some existing results along this line. The first result is called the Poincare inequality, which is a very beautiful result -- also for other reasons, not only for concentration inequalities, and for reasons not related to this course. The inequality says the following. If x1 up to xn are Gaussian with mean 0 and variance 1, and you have some function f, you can look at the variance of f(x). Note this doesn't prove that f(x) is sub-Gaussian; it only gives a bound on the variance, which is something necessary to have -- if you can't bound the variance, you certainly won't be able to show sub-Gaussianity. The bound is that the variance is at most the expectation of the squared norm of the gradient of f at the random variable x -- exactly as suggested before in the question. So this is, in some sense, the ideal type of right-hand side you would hope for: the concentration of the random variable f(x) is controlled by how sensitive, how Lipschitz, the function is.
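The Poincare inequality can be sanity-checked in closed form. An editorial example, assuming f = sin and X ~ N(0,1): using the identity E[cos(tX)] = exp(-t^2/2), both sides can be computed exactly.

```python
import math

# f = sin, X ~ N(0,1).  Using E[cos(tX)] = exp(-t^2/2):
#   Var sin(X) = E[sin^2 X] = (1 - E[cos 2X]) / 2 = (1 - e^{-2}) / 2
#   E[f'(X)^2] = E[cos^2 X] = (1 + E[cos 2X]) / 2 = (1 + e^{-2}) / 2
var_f = (1 - math.exp(-2)) / 2      # left-hand side (E[sin X] = 0 by symmetry)
grad_sq = (1 + math.exp(-2)) / 2    # right-hand side
assert var_f <= grad_sq             # the Poincare inequality holds, as it must
print(var_f, "<=", grad_sq)
```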
So this is the idealistic, basically the best kind of thing you can hope for. But the limitation here is that the left-hand side only controls the variance; it doesn't control the tail explicitly. So if you want to turn the variance into a tail bound, you have to use Chebyshev, and you only get a 1 over t squared bound. And you can also deal with other kinds of Gaussian variables -- it doesn't have to be mean 0 and variance 1; that's easy. And the stronger thing is the following theorem, where we can deal with the tail. Here, you suppose f is L-Lipschitz with respect to the Euclidean distance, which is saying that the absolute value of f(x) minus f(y) is at most L times the Euclidean norm of x minus y, for every x and y. In some sense, this is saying that the gradient of f is uniformly bounded by L, right? So you can see that this is different from the Poincare setting, because here you require the gradient at every point to be at most L, and above you only required the average squared gradient to be small. So here we make the stronger assumption that the function is globally Lipschitz, and then you can have a stronger bound on the tail. So now let x1 up to xn be i.i.d. standard Gaussians. Then you have the tail bound: the probability that the absolute value of f(x) minus its expectation is larger than t is at most 2 times exponential of minus t squared over 2 L squared. So basically, f(x) is sub-Gaussian with variance proxy L squared, maybe up to a constant depending on which definition of sub-Gaussian you use. And note the L here is the absolute bound on the gradient, not the expected gradient. So you can see the flavor of all of these concentration inequalities: it really depends on when you take the sup and when you take the expectation. For different kinds of conditions, you can have different theorems with different strengths. Any questions? [INAUDIBLE] I don't think I know the exact result off the top of my head. I think-- could you get a higher moment from the one below?
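Here is a hedged numerical illustration of this dimension-free Gaussian concentration (my example, not from the lecture), using the 1-Lipschitz function f(x) = ||x||_2:

```python
import numpy as np

rng = np.random.default_rng(2)
d, trials = 100, 20000
# f(x) = ||x||_2 is 1-Lipschitz in Euclidean distance, so the tail bound
# 2 exp(-t^2/2) should hold around the mean, with no dependence on d
norms = np.linalg.norm(rng.standard_normal((trials, d)), axis=1)
mean_norm = norms.mean()            # Monte Carlo stand-in for E[f(x)]
for t in [1.0, 2.0, 3.0]:
    empirical = np.mean(np.abs(norms - mean_norm) > t)
    assert empirical <= 2 * np.exp(-t**2 / 2)
print("dimension-free concentration of ||x||_2 confirmed at t = 1, 2, 3")
```

Even in dimension 100, where f itself is around 10, the fluctuations stay order-one: that is the dimension-free part of the theorem.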
I guess, I think if you want to have higher moment, you have to assume something stronger. That's my hunch. So for example, this one below will give you a higher moment. So I'm not sure whether you can have a higher moment bound that has weaker conditions than this. I don't know. Also, I don't know too much about PDEs. So I could miss. I don't know everything. This is the only thing I know. But indeed, this Poincare inequality has a lot of different applications, not only here. So we have-- this is-- we have 15 minutes-- we have 10 minutes. So it's a little bit challenging for me to give the full proof for the McDiarmid inequality in 10 minutes, but I think I would try a little bit. If I couldn't have the full proof, I can give you a sketch. So that's the last thing I was planning to do. So for all of the inequality above, like this Poincare inequality, this tail bound for Gaussian, I think they are beyond the scope of this course. We are already doing a lot of things in the technical part. So these things, probably, even I'd do it, I would just invoke a theorem from a book. So you don't need to know the proof. For the McDiarmid inequality, I don't think you need to know the proof. But I think the proof is kind of interesting to some extent. So it's probably worth showing. So let's try that in the next 10 minutes. So we care about bounding. We care about something like this. And we have the bounded difference condition. And the high level intuition is that you want to-- so this one can correlate f of x1 up to xn is kind of like something-- it could be a very complex function, complicated function of x1 up to xn. But somehow, you still want to reduce it to a sum in some sense. But a reduction is not that-- it's not straightforward, the reduction is like this. So the way you do it is the following. So you say that-- you defines a sequence of random variables. Let's define z0 to be the expectation of x1 up to xn. So this is just nothing. 
It's just a scalar, which is a constant. And then define z1 to be expectation of f x1 up to xn conditional x1. So what does this mean? This is a function. This is a function of x1. So basically, z1 is a function of x1. But you average out all the other xi's. And you can also define zi, which is the expectation of x1 up to xn conditional the first i random variable. So this is a function of x1 up to xi. So given x1 up to xi, this becomes a scalar, because all the other randomness got ever stopped. So in some sense, you can see that z0 doesn't have any randomness. z1 has a little randomness, because it's a function of random variable, x1. So it's a random variable. And zi has more and more randomness. And zn is finally what you care about, which is the fully random case. And the important thing is that you care about zn minus the z0, the f minus the expectations. And you can decompose this into a sequence of things. So like this telescoping sum. And this is what I mean by reduction to the sum. So basically, now you have a sum of random variables. And you somehow kind of think of them as independent in some sense. They're definitely not exactly independent. But you're going-- we use the proof that you use for the summation. That's what we want to see. And if you-- look at this, right. So this is a function of x1. And this is a function of x2, of x1 and x2, so on and so forth. And this is a function of x1 up to xn. This depends on all the random variables. OK? And now, let's try to see what we know about each of these z and zi minus zi minus 1. All right? So first of all, we know that for every zi, if you take expectation of zi, this is expectation of-- expectation of f x1 up to xn. So in the inside, you have a function of x1 up to xi. And then also that you averaged out all the randomness of x1 up to xi again. So this is-- so this is equals to this expectation of f by-- this is called a total law of expectation, right? 
You take the expectation of the conditional expectation, and you get the unconditional expectation. So this equals the expectation of f, which is basically z0. And this means that the expectation of zi minus zi minus 1 is equal to zero. So each of the random variables in this decomposition is mean 0. So basically, the plan is this: define di to be zi minus zi minus 1. What you're going to do is bound the moment generating function of each of the di, and then, because the final quantity is the sum of the di, you can bound the moment generating function of that sum. So let's work on each of the di first, right? I'm going to claim that zi minus zi minus 1 always varies within a range of size ci, where ci is from the bounded difference condition in the McDiarmid inequality. So how do I do that? Let me see whether I can simplify this proof a little for the sake of time -- let's only prove it for z1 minus z0, just in the interest of time. If you look at z1, z1 is the expectation of f(x1, ..., xn) conditioned on x1. And you can upper bound it by replacing the first argument with the sup over all possible choices of that first coordinate, call it z. After you do this, the quantity is not a function of x1 anymore, so it doesn't matter whether you condition on x1 or not: you literally just get the expectation of the sup over z of f(z, x2, ..., xn). And also, for the same reason, z1 is at least the expectation of the inf over z of f(z, x2, ..., xn). So you have an upper bound and a lower bound for z1. I guess these two inequalities are not by themselves the final bound; what's really useful is this. If you look at z1 minus z0, this is the expectation of f(x1, ..., xn) conditioned on x1, minus the expectation of f(x1, ..., xn). And you can bound this by the expectation of the sup, using what we have done above, minus the expectation of f(x1, ..., xn).
So here, both of these -- and then you can put this inside. I think it's slightly confusing when you really look at the math, but intuitively, what you're saying is that the difference between z1 and z0 involves only one coordinate. And we know that if you change that one coordinate, you cannot make much difference, right? That's what we know: for any x2 up to xn, if you change only x1, you wouldn't change f much. That's why z1 and z0 can't differ by much, because the only thing different is x1. OK, but maybe let me have the formal proof. On the other hand, you can also prove the same kind of bound from below: this is larger than the expectation of the inf over z. So basically, I'm trying to say that z1 minus z0 is upper and lower bounded by the extreme cases, where you pick your z in the worst case. And this means that if you call the lower bound a1 and the upper bound b1, then you have an upper bound and a lower bound on z1 minus z0, and you can control their gap: b1 minus a1 is the expectation of the sup over z minus the inf over z -- that's the extreme case -- and this is exactly the ci that we defined, right? If you change your input in the first coordinate, the maximum change you can make is c1. So this difference is at most c1, by the bounded difference condition. So basically, this is saying that z1 minus z0 lies between a1 and b1, and b1 minus a1 is at most c1. So this random variable z1 minus z0 is bounded in a small interval. And similarly, you can show that zi minus zi minus 1 is bounded between some bi and ai, with bi minus ai at most ci. So recall that our final goal is zn minus z0, which is the sum of the zi minus zi minus 1 terms, and we have proved that each of these random variables is bounded in some small interval.
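The Doob-style construction above becomes very concrete when f is itself a sum. In that case each Zi has a closed form, and the telescoping and the bounded increments can be checked directly (an editorial sketch, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10
x = rng.uniform(0, 1, size=n)
# Z_i = E[f | X_1..X_i] for f = sum: realized part plus expected remainder
Z = [x[:i].sum() + (n - i) * 0.5 for i in range(n + 1)]
assert abs(Z[0] - n * 0.5) < 1e-12      # Z_0 = E[f], a constant
assert abs(Z[n] - x.sum()) < 1e-12      # Z_n = f itself
increments = [Z[i] - Z[i - 1] for i in range(1, n + 1)]   # d_i = Z_i - Z_{i-1}
# telescoping: the d_i sum to Z_n - Z_0 = f - E[f]
assert abs(sum(increments) - (Z[n] - Z[0])) < 1e-12
# each d_i = x_i - 1/2 sits in an interval of length c_i = b_i - a_i = 1
assert all(-0.5 - 1e-12 <= di <= 0.5 + 1e-12 for di in increments)
print("telescoping and bounded increments verified")
```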
And now we can use the moment generating function. So what you do is you say you take expectation of lambda zn minus z0. And this is expectation e of lambda sum of zi minus zi minus 1. So the first thing we have to do is to factorize them in some way, right? So how do we factorize them? We just use the conditional-- we kind of do the chain-- in some sense of the chain. So what you do is that your first condition down, x1 up to xn minus 1. So then you have this expectation e lambda zi minus zi minus 1 conditional x1 up to xn minus 1. And when condition on it, you get this. And then the rest of the things, it's a function of xn up to xn minus 1. All right, so this is-- what's inside your condition x1 up to xn minus 1, this one only depends on-- so this is a function of x1 up to xn minus 1. And this is the function of xn. So that's why it's inside that expectation. And then this term, because zn minus zn minus 1 is bounded, and it's bounded in a strong sense, in the sense that for every possible choice of x, so you know that this is a kind of absolute bond for zn minus zn minus 1. So we know that this, lambda zn minus zn1, this is less than exponential of lambda squared sigma cn squared over 2. This is because if you have a bounded random variable, and we know that it's sub-Gaussian. So you can verify this in various ways. One way to do it is to just-- actually, this will show up in the homework. This is one of the homework questions we defined. So if you have a bounded random variable, it's sub-Gaussian, right? And you can bound the moment generating function. And then you can replace this term by-- this absolute quantity cn squared over 2 and times the sum of the other terms. I think this is n minus 1. And then you peel off the second term again and again. So you do this iteratively. I guess given that we're already running out of time. So look at this. So if you have-- you can do something like this. I guess this is actually 8 if you really do it carefully. 
So yeah, I guess I will just sketch this. So this means that f minus expectation f is equals to sum of zi minus zi minus 1 is sub-Gaussian with variance proxy sigma squared. I guess that's the end of the proof. But this proof is optional. It's just that we have more time. So that's why I show the proof. OK. Any question? What was the step before the equation in the blue circle-- is that-- [INAUDIBLE] At the end of that line, is that just based on the [INAUDIBLE].. You mean this one? Yeah, so from here to here? This is just-- it's just a triple step, I guess, maybe technically what I should write is maybe-- let me do this here. So if you want to do two steps, the first thing is you do this. Sorry, you do-- you just-- do the total expectation. You condition-- you first condition x1 up to xn. We do this, right? So this is the law of total expectation. And then you find that this term is a constant when you condition on x1 up to xn minus 1. So that's why you can move it outside. Yeah, there's nothing deep there. OK, sounds good. OK cool, so I guess see you next Monday. |
Stanford CS229M: Machine Learning Theory (Fall 2021), Lecture 19: Mixture of Gaussians, spectral clustering.

OK. I guess, let's get started. Let's see. Is this working? Yes. So last time, we talked about unsupervised learning, and today we're going to continue with unsupervised learning. First, we're going to continue with the moment method -- here we're going to talk about higher-order moments -- and then we're going to talk about something called clustering, or spectral clustering in more technical words. So these are different types of unsupervised learning algorithms. Just to continue with what we had last time: we ended up with this mixture of Gaussians. The setup was that you have some x which is sampled from a mixture of k Gaussians with means mu_i and identity covariance. Last time, at the beginning, we talked about the case k = 2, where you have a mixture of two Gaussians, and in that special case, you can just take the second moment of the Gaussian to recover the mu_i's. And then we moved on to the case k bigger than two. In that case, we argued that if you take the second moment, you get something like 1 over k times the sum of mu_i mu_i transpose. And this is not enough to recover the mu_i's, because, given this second moment, you still cannot identify the mu_i's precisely -- there are multiple sets of mu_i's that can have the same second moment. So that motivates us to consider the third moment. The third moment, as we discussed, is the expectation of x tensor x tensor x. This is a third-order tensor of dimension d by d by d. And let's compute the third moment, with the hope that it will tell us enough about the mu_i's that we can recover them from it. And that's indeed the case. So what we do here is the following. We compute the third moment.
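The non-identifiability claim about the second moment can be made concrete. The following is a toy example added for illustration (it is not from the lecture): two genuinely different sets of k = 4 means in two dimensions whose first and second mixture moments coincide exactly.

```python
import numpy as np

# Two sets of k = 4 candidate means in d = 2 (hypothetical toy values)
A = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
B = np.array([[np.sqrt(2), 0.0], [-np.sqrt(2), 0.0],
              [0.0, np.sqrt(2)], [0.0, -np.sqrt(2)]])

def second_moment(M):
    # (1/k) sum_i mu_i mu_i^T
    return sum(np.outer(m, m) for m in M) / len(M)

assert np.allclose(A.mean(axis=0), B.mean(axis=0))      # same first moment
assert np.allclose(second_moment(A), second_moment(B))  # same second moment
print(second_moment(A))   # both equal the 2x2 identity
```

So two different mixtures are indistinguishable from the first two moments alone, which is the motivation for going to the third moment.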
And I guess the initial step is always the same: because you have a mixture of k clusters, what you do is write the moment as 1 over k times the sum of the moment conditioned on each cluster i, where i is the cluster ID. And now the question becomes: if you have an x drawn from a single Gaussian, what's its third moment? What is this expectation of x tensor x tensor x conditioned on i? So let's do some simplification -- an abstraction, in some sense, to make the notation simpler. Suppose z is drawn from a Gaussian with mean a -- let's call it a just to distinguish it from the mu's -- and identity covariance. So we have this lemma; that's the condition. And then our question is: what is the expectation of z tensor z tensor z? And the claim is that this is pretty much equal to a tensor a tensor a, but with some caveats -- there are some other terms, which look like this: plus the sum over l from 1 to d of (expectation of z) tensor e_l tensor e_l, plus e_l tensor (expectation of z) tensor e_l, plus e_l tensor e_l tensor (expectation of z). Note that the expectation of z is literally just a, so basically you already have a formula that expresses the third moment of z as a function of a. That makes sense, because a decides everything -- so everything at the end should be a function of a. The reason we still write the expectation of z in this formula is that we want to implicitly say that these correction terms are about the first moment. So maybe the more important thing is this: it means we can compute a tensor a tensor a from linear combinations of the third moment and the first moment. Why is it useful to get this?
I think it will be clearer in a moment why it's useful to get a tensor a tensor a. But this lemma tells us that if you know the first moment and the third moment, you can get a tensor a tensor a. OK. And let's see -- any questions so far? I guess it's not exactly clear why this lemma is useful at the current point. The main point is that you can compute the third moment when z is just a single Gaussian. And I'm going to show the proof. The proof is nothing super interesting, but it tells you how to do this kind of derivation for moments, and once you've seen it once, all the others become kind of trivial. So how do you compute the third moment? You do it entry by entry. You say: look at the (i, j, k) entry of this tensor. This is just the expectation of z_i z_j z_k, where z_i denotes the i-th coordinate. And sorry for the note-takers -- I'm changing my notation for the mean to v here, just to be consistent; it's just a generic variable. So to compute this moment, we just do it, in some sense, in a brute-force way. What is z_i z_j z_k? You can write z_i as v_i plus ksi_i, z_j as v_j plus ksi_j, and z_k as v_k plus ksi_k. We're using the fact that z as a vector is equal to v plus ksi, where ksi is a spherical Gaussian -- that's the definition of ksi, in some sense, because ksi equals z minus v, which has a spherical Gaussian distribution. And, by the way, v_i is the i-th coordinate, just to be clear. And then we can expand this, so there are eight terms in this product. What are they? One of the terms is v_i v_j v_k. That's easy: v is deterministic; only ksi is random.

And some of the terms look like the expectation of v_i v_j ksi_k. One of the terms is that; another is the expectation of v_i v_k ksi_j, plus the expectation of v_j v_k ksi_i. And these terms are equal to 0, because the expectation of ksi is 0 and v is a deterministic quantity. That's why they are going to be 0. And then we have the other three terms, which look like the expectation of v_i ksi_j ksi_k, plus the expectation of v_j ksi_i ksi_k, plus the expectation of v_k ksi_i ksi_j. These terms are a little bit different; let me deal with them in a moment. And the last type of term is the product of the three ksi's. So how do we deal with the remaining four terms? The thing is, if you look at the expectation of ksi_i ksi_k, what is it equal to? It is equal to 0 if i is not equal to k, because then ksi_i and ksi_k are two independent random variables, and you can factorize it into the expectation of ksi_i times the expectation of ksi_k -- they're both 0, so you get 0. And it is 1 if i is equal to k, because -- OK, maybe let's have one more step -- it equals the expectation of ksi_i squared, which is equal to 1. So in summary, the expectation of ksi_i ksi_k equals the indicator that i equals k. And you can also try to do this with ksi_i ksi_j ksi_k. Here you can still do the same thing: divide into different cases, depending on whether i, j, k are all the same, or maybe two of them are the same and one is different -- there are a few cases. And actually, if you enumerate all of those cases, it turns out that it's always 0, regardless of the choice of i, j, k, but for different reasons. For example, when i, j, k are all the same, this is the expectation of ksi_i cubed, and that's 0 because the third moment of a standard Gaussian is 0. And when i is equal to j but not equal to k, you do another, different calculation. But generally, you can do all of those calculations, and they're all equal to 0.
I think the fundamental reason is that any odd-degree monomial of these ksi i's has expectation 0; it doesn't matter which one--the expectation is always going to be 0. So these are all, in some sense, elementary calculations. And if you use them, you can continue: you get that the expectation equals vi vj vk, plus vi times the indicator that j equals k, plus vj times the indicator that i equals k, plus vk times the indicator that i equals j. And this pretty much completes the proof--you just have to rewrite this in tensor form. So if you verify the target equation, v tensor v tensor v plus the sum over l of v tensor el tensor el, plus el tensor v tensor el, plus el tensor el tensor v, entry by entry, you see that it's exactly what we got for every entry. It's just a reorganization. So how do you verify that? You take the ijk coordinates. The question is, what is the ijk coordinate of v tensor el tensor el? It always has a vi there, because vi is always there, and then the jk coordinate depends on el tensor el. If you really write it out, this is vi times the jth coordinate of el times the kth coordinate of el. And in what case are the jth coordinate of el and the kth coordinate of el both non-zero? The only case is that l is equal to j and l is equal to k. So this is vi times 1 when l equals j and l equals k, and the only way this can happen is when j is equal to k.
Otherwise, it's going to be 0. So that's how you verify. I don't expect you to verify it completely on the fly. And in some sense the exact formula doesn't matter that much either way: you only need some formula that expresses the third moment in terms of v. OK? Any questions so far? So now let's see how we use it, and you can see what kind of things we exactly need. Now you look at x, a mixture of Gaussians--z was only a single Gaussian--and you use the single Gaussian as a building block to compute the moment of the mixture of Gaussians. What you do is: conditioned on the component i, x becomes a Gaussian, so you apply the lemma, and you get 1 over k times the sum over i from 1 to k of mu i tensor mu i tensor mu i--mu i is taking the place of v--plus the three additional terms: mu i tensor el tensor el, plus el tensor mu i tensor el, plus el tensor el tensor mu i, summed over l. So, basically, the third moment of x is a function of the mu i's. It's still a little bit messy, so what you do is get rid of all of these extra terms by using the first moment of x. You first reorganize a little bit: you keep the somewhat clean-looking term, the sum of mu i tensor mu i tensor mu i, and then you switch the order of the two sums. For the rest of the terms you get the sum over l from 1 to d of 1 over k times the sum over i from 1 to k of mu i, tensor el tensor el, plus the two other terms, which you can imagine--they are just the same thing with the slots permuted. And now 1 over k times the sum of the mu i's is exactly the first moment of x. So you get the sum of the mu i tensor cubes plus something that depends only on the first moment of x.
So what does this mean? It means you can move those three terms to the left-hand side. Basically, this means we can compute this tensor--the sum of mu i tensor mu i tensor mu i--from the third moment and the first moment. So that's basically our interface. Just for notational purposes, a to the tensor 3 is shorthand for a tensor a tensor a. So what this whole computation is saying is that you can compute the sum of mu i to the tensor 3, and then you need to design an algorithm that computes the mu i's from this. And if you can do this question mark, then you're done--the whole thing is solved--because you first use the moments to compute the third-order tensor of the mu i's, and then you run this algorithm. There are actually some cleaner ways to deal with this: you don't have to deal with these additional terms, and there are other ways to get this exact tensor more directly. But that requires a lot of other machinery, so I'm only using this relatively brute-force way to get such a tensor. The point is that you can always get something like this. So now the problem becomes the so-called tensor decomposition problem. Abstractly speaking, you have a sequence of vectors, a1 up to ak, all in dimension d. These are unknown. What you're given is a tensor that looks like the sum over i from 1 to k of ai tensor ai tensor ai, and your goal is to reconstruct the ai's. You can also ask the same question for different orders of tensors--for example, for some rth-order tensor with r possibly bigger than 3.
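Just to sanity-check this interface end to end, here is a small numpy simulation--my own illustration, with arbitrary constants, not something from the lecture. It samples a uniform mixture of spherical Gaussians, estimates the first and third moments empirically, subtracts the correction terms derived above, and compares the result against 1 over k times the sum of the mu i tensor cubes:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, n = 3, 2, 500_000
mu = rng.standard_normal((k, d))              # the unknown centers

# sample from the uniform mixture of N(mu_i, I)
idx = rng.integers(k, size=n)
x = mu[idx] + rng.standard_normal((n, d))

M1 = x.mean(axis=0)                           # empirical first moment
M3 = np.einsum('ni,nj,nk->ijk', x, x, x) / n  # empirical third moment

# subtract the correction terms, which depend only on the first moment:
# sum over l of M1 (x) el (x) el, plus the two permuted versions
I = np.eye(d)
corr = (np.einsum('i,jk->ijk', M1, I)
        + np.einsum('j,ik->ijk', M1, I)
        + np.einsum('k,ij->ijk', M1, I))
T = M3 - corr                                 # ~ (1/k) * sum_i mu_i^{tensor 3}

target = np.einsum('ci,cj,ck->ijk', mu, mu, mu) / k
print(np.max(np.abs(T - target)))             # small Monte Carlo error
```

The leftover error shrinks like 1 over the square root of n, which is exactly the sample-efficiency question that comes up later.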
And it turns out that you can also get the fourth-order tensor--the fourth tensor power--from this moment method: you take the fourth moment of the data, and with some rearrangement like we have done, you get the sum of the ai to the tensor 4. So, basically, this is the interface: you reduce the [INAUDIBLE] problem to the so-called tensor decomposition problem. And this tensor decomposition problem has certain standard notions--let me also introduce some notation for it. So, some basic notions for the rank: let's say a tensor b tensor c is a rank-1 tensor. This is the definition of a rank-1 tensor. Then the rank of a tensor T is the minimum k such that T can be written as a sum of k rank-1 tensors. Sometimes this is also called the CP decomposition. In some sense, the reason this is called a decomposition is that you observe a sum of rank-1 tensors and you want to decompose it into components, each of which is rank-1. It's called CP decomposition because there are other decompositions for tensors that can also be meaningful in other cases, but it's also fine to just call it tensor decomposition, because this is the most popular decomposition for tensors. OK. So now it becomes a very modularized, algorithmic question: given a low-rank tensor, how do you figure out the components? What I'm going to do is basically list some of the existing results without really going into details, because of what happened in this area--I think it became very popular around 2012, 2013. In the very beginning, a few papers laid out the framework for this whole thing: how do you compute a moment?
How do you convert it into a tensor decomposition problem? Those early papers provided some somewhat easy tensor decomposition algorithms, or they invoked some of the existing tensor decomposition results. And then this field split into two parts: one part is about how you compute the moments--how you turn the moments into a tensor--and the second part is how you decompose the tensor. So there are a lot of papers, including some of my own work, that try to understand how to decompose all different kinds of tensors, and under what conditions you can decompose them. What I'm going to do is list a few conditions under which you can decompose these tensors computationally efficiently, and those conditions then turn into conditions for the upstream problem--for example, for the mixture of Gaussians problem, you're going to get some corresponding conditions. So, just to set up the baseline: maybe number 0 is that in the most general case--in the worst case, in the more TCS language--this problem is not solvable. Finding the ai's is computationally hard. Actually, there are several layers here as well if you want to discuss the details. In the very worst case, the ai's are not even unique--you don't have a unique decomposition. And there are also cases where the decomposition is unique, but you cannot find it in a computationally efficient way. I think there's a question. So [INAUDIBLE] you can put [INAUDIBLE]? So if you take 3 and replace 3 by 2, then it's pretty much like matrices. This here is symmetric, but you can also make it asymmetric. But, yes, you are right--it's basically linear algebraic stuff. And this is a very good question.
So I think, in some sense, as you will see in some of the questions below, in some aspects tensor decomposition is close to matrix decomposition. But there is one fundamental difference, and that fundamental difference is what makes these kinds of tools powerful but also challenging. It's fundamentally powerful because here there is no rotational invariance. You have to interpret this "no rotational invariance" in a careful way: what I mean is that the sum of ai tensor 3 is not the same as the sum of the rotated ai's tensor 3. However, the analogous statement is true for matrices. If you have the sum of ai ai transposed--which is A times A transposed, if you put all the ai's as columns of a matrix A--then this is equal to (A R) times (A R) transposed when R is a rotation matrix, because R times R transposed is the identity. You just cannot do this for tensors, typically. But what does happen is that if you permute--if you take the ai's to ai prime, where the ai primes are just a permutation of the ai's--then the resulting sum, the third-order tensor, is still the same. So you only have permutation symmetry, but no rotation symmetry. And this actually makes it powerful because, in many cases, this matches the problem. For a mixture of Gaussians, you can permute all the centers, and it's still the same mixture of Gaussians.
But you cannot rotate the coordinate system--at least, you cannot take linear combinations of the centers and still maintain the same mixture of Gaussians. And I think this also applies to neural networks: for neural networks you have permutation symmetry, where you can permute the neurons in intermediate layers, together with the associated edges, and still maintain exactly the same functionality of the network. But you cannot do arbitrary rotations, because you have the nonlinear activations. This part is supposed to be somewhat abstract--if you see a lot of the math, you can probably understand it a little better. But anyway, there are some fundamental differences between this and linear algebra, and that's why tensor decomposition becomes difficult. OK. Going back to the list of cases: as I said, the starting point is that in the general case, you cannot hope to do anything. But there are many cases where you can do something. The easiest case is the orthogonal case, which means a1 up to ak are orthogonal. This is the case closest to the eigenvector case. Here you can say that each ai is a global maximizer--there are multiple global maximizers, which is why each of them is one--of the objective function where you maximize, over unit vectors x, this tensor applied to the rank-1 tensor x tensor x tensor x. If you're not familiar with the notation, what I really mean is the sum of Tijk times xi xj xk. This is the extension of the quadratic form for matrices: if you have a matrix, this is the quadratic form, and for a tensor, this is the tensor form.
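To make the orthogonal case concrete: one standard way to attack that optimization is the tensor power method, the tensor analogue of matrix power iteration. This is a sketch under my own toy setup--orthonormal components and an exact tensor--not something spelled out in the lecture:

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 6, 3
# orthonormal components a_1, ..., a_k (columns of A)
A = np.linalg.qr(rng.standard_normal((d, d)))[0][:, :k]
T = np.einsum('il,jl,kl->ijk', A, A, A)        # sum_i a_i tensor a_i tensor a_i

recovered = []
for _ in range(k):
    x = rng.standard_normal(d)
    for _ in range(100):                       # tensor power iteration:
        x = np.einsum('ijk,j,k->i', T, x, x)   #   x <- T(I, x, x)
        x /= np.linalg.norm(x)                 #   then renormalize
    lam = np.einsum('ijk,i,j,k->', T, x, x, x) # ~1 for a true component here
    recovered.append(x)
    T = T - lam * np.einsum('i,j,k->ijk', x, x, x)  # deflate and repeat
```

Writing x in the basis of the ai's, one power step squares each coefficient, so the largest one takes over very quickly--that is why each run converges to one of the components, and deflation peels them off one at a time.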
So eigenvectors can be defined in this way if you change the tensor to a matrix, because an eigenvector is what maximizes the quadratic form for the matrix. So, in this sense, the components are some kind of eigenvectors of T. And we can find them--it's not trivial, but we can find the ai's in polynomial time, and one way to find them is to try to solve this optimization problem. So that's one case. A more general case is the independent case: it turns out that if a1 up to ak are linearly independent, this is also a good case, and you can find them in polynomial time. The algorithm is called Jennrich's algorithm. I'm not going to describe the algorithm, just because it would take too much time; these are things that, as long as you have some basic knowledge, you can search for in the literature--there are many papers about this. Cases 1 and 2 are both about the so-called undercomplete case, which just means that k, the number of components, is at most d. Number 1 and number 2 can only happen when k is at most d, because if k is bigger than d, there is no way a1 up to ak are linearly independent--the number of components is bigger than the dimension, so they cannot be linearly independent. But you can also handle the overcomplete case, which means k is bigger than d, in certain situations. There are several different ways to deal with the overcomplete case. The first one is to look at higher-order tensors.
So you can suppose that a1 tensor 2 up to ak tensor 2 are linearly independent. This is a much more relaxed condition than a1 up to ak being linearly independent, because now you're in a higher dimension: it only requires k to be at most d squared for this to be possible. Suppose this is true. Then you can just replace ai by ai tensor 2, so you can recover the ai's from the sixth-order tensor: you recover the ai tensor 2's from the sum over i from 1 to k of (ai tensor 2) to the tensor power 3, which is just the sixth-order tensor. How do you do it? You invoke the third-order result on the ai tensor 2's, and after you get ai tensor 2, you can get ai by taking the square root--factoring the rank-1 matrix ai ai transposed. So this relaxes the restriction on k, but at the cost of estimating the sixth moment, because this is a tensor in R to the d to the sixth, so you have to do something with the sixth moment, and it will be less sample-efficient. A slightly cleverer way to do this is to use the fourth-order tensor with the same condition, for generic tensors. What does generic really mean? It means you exclude an algebraic set of measure 0--except for that measure-0 set of tensors, you can do this. This is saying that when k is at most d squared, you can recover the ai's from the fourth-order tensor. Before, with the trivial reduction, you needed the sixth-order tensor; now you only need the fourth-order tensor. This algorithm is called FOOBI. The algorithm by itself is not robust, but there are also robust versions of it. Let me not write down the references here--I'll add them later, I guess.
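The lifting trick just described--replacing ai by ai tensor 2 and later "taking the square root"--can be sketched in a few lines. The dimensions here are my own toy choices, just to show that k can exceed d while the lifted vectors stay independent:

```python
import numpy as np

rng = np.random.default_rng(4)
d, k = 3, 6                      # overcomplete: k > d, but k < d^2 = 9
A = rng.standard_normal((d, k))

# lift each a_i to vec(a_i a_i^T), a vector in dimension d^2
B = np.stack([np.outer(A[:, i], A[:, i]).ravel() for i in range(k)], axis=1)
print(np.linalg.matrix_rank(B))  # generically k: the lifted vectors are independent

# undoing the lift ("taking the square root"): recover a_i from a_i tensor 2,
# i.e. the top eigenpair of the rank-1 matrix a_i a_i^T
M = B[:, 0].reshape(d, d)
w, V = np.linalg.eigh(M)
a_hat = np.sqrt(w[-1]) * V[:, -1]  # equals a_0 up to sign
```

The sign ambiguity in the last step is harmless here: for odd-order problems like the mixture of Gaussians, the sign can be fixed from the third-order information.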
There are references where you can get robust versions of these algorithms. And if you want to be more ambitious--you want to deal with even the third-order tensor--then what you can do is consider random tensors. By random, I mean you assume the ai's are randomly generated unit vectors. Whether they're unit vectors is not that important, but for convenience, let's say they're all unit vectors, randomly distributed on the sphere. Then, even for the third-order tensor, k can be as large as d to the 1.5. So you can handle the overcomplete case even with a third-order tensor. There are some references here, which I'll add to the notes eventually. OK, cool. So this is just a very quick, probably a little boring, list of results. But you see the rough idea: for various conditions on the components ai, you have various kinds of algorithms and different results. Typically, the stronger your assumptions on the ai's, the stronger the results. The strongest assumption is that they're random--then you can even decompose overcomplete tensors when the order is only 3. But if you don't have that strong an assumption, you have to go with the fourth-order tensor, or even the sixth-order tensor if you don't use the right [INAUDIBLE]. So this is basically what's going on in this area, and you can see there are many, many papers that deal with different kinds of setups. I will add some references to the lecture notes, but generally this is something you can search for on the internet.
Before we conclude this part: there are other latent variable models that can be handled by the method of moments, using the same strategy where you first compute a moment and turn it into a tensor decomposition problem. You can do so-called ICA, independent component analysis; you can do hidden Markov models; and you can also do topic models. I think there are even more than this--I'm just listing a few of the most prominent ones. These are all viable models for unsupervised learning, and for each of them you can compute certain moments, rearrange them so that you get a tensor, and then decompose the tensor to recover the true parameters. Any questions so far? What do you get if, say, for example, it's a third-order tensor? So you want to activate it based on [INAUDIBLE]. Right. I guess it would be more general [INAUDIBLE] tensor. It's more-- [INAUDIBLE] So [INAUDIBLE], is there, say, [INAUDIBLE]. I don't [INAUDIBLE]. What is the first [INAUDIBLE]? Let me try to answer, and then you can clarify if I didn't answer the question. The flow is something like this: you first start with the data. You compute some tensor--maybe third-order, maybe fourth-order. And, of course, you cannot compute this exactly; you compute it approximately, with some error in estimating this fourth moment. And you know that if you don't have any error, this will be the sum over i from 1 to k of ai to the tensor 4. Then you decompose, and you get the ai's. And how does the dependency work? One thing is whether it's overcomplete or undercomplete. Why does that matter? It matters because of what k is: in a mixture of Gaussians, k is the number of mixture components.
So if you can handle overcomplete tensor decomposition, that means that for the original problem you can handle more than d mixture components--the number of mixtures you can handle is more than the dimension. And if you can only do undercomplete tensors, then your number of mixtures has to be less than the dimension. That's why people care about overcomplete tensors. My question is, [INAUDIBLE] expectation [INAUDIBLE] with [INAUDIBLE] larger k [INAUDIBLE]. With larger k? The k here is something fixed--k is the number of mixture components in our data; it's fixed. I guess maybe what you're asking about is the empirical version. The real situation is that you work with the empirical moment, and you say it is approximately equal to the sum of ai to the tensor 4, and then you decompose that approximate version. So you also need your decomposition algorithm to be robust to some errors, because you don't know this low-rank tensor exactly--you only know an approximate version of it. Am I answering the question? Go ahead--maybe I'm not answering the right question. [INAUDIBLE] Right. [INAUDIBLE] This is the tensor decomposition. Right--you can think of tensor decomposition as a low-rank approximation for tensors. Yes. So [INAUDIBLE]. So it's a [INAUDIBLE] best approximation [INAUDIBLE]. So all of the theorems I listed so far work for the approximate version, even though I didn't talk about the approximation explicitly. In some sense, the first-order question is: even if you don't have any approximation--you get exactly a low-rank tensor--you have to be able to decompose it. Even that's nontrivial, right? For matrices it's trivial, because you just take the SVD; but for tensors it's not trivial. So the first-order question is: given an exact low-rank tensor, can I decompose it?
And then the second question is the so-called robustness: given an approximately low-rank tensor, how do you decompose it? I think all of these algorithms are robust, or there are robust versions of them. And typically, if you don't care about optimal sample efficiency, they're all robust for fairly trivial reasons. But if you really care about exactly how many samples you need and how robust they are, it becomes a little tricky, because you have to talk about sample efficiency and how the concentration works. [INAUDIBLE] find the largest [INAUDIBLE]. Yeah, you can roughly think of the ai's as the largest eigenvectors. OK? That's good? OK, cool. So I'm going to move on to the last subtopic of this course. It's still about unsupervised learning, but a slightly different type, which is more classical. You'll see that we're still doing spectral methods--still doing some kind of spectral decomposition--but decomposing in a slightly different way. You'll see once I formulate the problem: before, with the tensor method, you were building pairwise--or three-wise--information between the coordinates of the data. From now on, I'm going to talk about a different type of approach, where you build pairwise information between the data points and then do something on top of that. So: spectral clustering. I'm going to discuss a bunch of different algorithmic setups under this broad framework. This whole spectral clustering framework, I think, was proposed by Shi and Malik around 2000, and also by Andrew Ng, Michael Jordan, and Weiss in 2001.
I don't have the references in the lecture notes. So it has been around for about 20 years. I'm going to discuss a bunch of classical things about this, and next lecture I'm going to talk about my own work, which builds on top of this to extend it to the deep learning case. The general setup is: we are given n data points, call them x1 up to xn, and, for the moment, we are given a similarity matrix. Don't ask me how to get it--let's just assume we have a similarity matrix G of dimension n by n. Actually, constructing the similarity matrix is going to be a problem to some extent, but for the moment, say we have it--and in some cases, we really do have such a matrix--where each entry captures a similarity between two data points, xi and xj. You can interpret this as similarity, or just generally as some matrix that captures some relationship between data points. I think it's reasonable to think of it as similarity: the larger the entry, the more similar. But this is not that important. So you can see this is what I call pairwise information between the data points, not pairwise information between the coordinates. In certain cases they're kind of the same, but in other cases they're not. For example, one example could be that the xi's are images, and rho of xi, xj measures the semantic similarity of the two images. How do you get this? It's a little bit tricky, because typically you cannot just take an l2 norm to measure semantic similarity--there could be two images that look pretty different but are semantically similar. But for the moment, let's assume we're given such a similarity matrix.
Example 2, which is probably the more classical usage of these kinds of models: think of the xi's as users of a social network, and rho of xi, xj equals 1 if they are friends--on Facebook, say. When they are friends, it means they share some kind of similarity, maybe similarity in jobs or interests or other things. So you can think of this as a similarity measure between two users. And eventually, in this case, you want to classify the users into groups: you want to detect hidden communities among users from this unlabeled graph. So, basically, the goal is to do clustering--clustering the data points. In the social network example, you have all of these users, and there's some friendship relationship between them. What you want to do is detect some so-called hidden communities. For example, you can say this is one cluster and this is another cluster--maybe this cluster corresponds to people at Stanford, and that cluster corresponds to people at Berkeley. And, of course, among Stanford students you have more connections, among Berkeley students they have more connections, and there are some connections across the groups, and so forth. For example 2, you can also think of G as a graph--even in the general case you can view G as a graph, but it would be a weighted graph. Here, in this social network case, G is binary, because each Gij is binary. So you can view G as a graph, where Gij indicates an edge, and your goal is to partition it. There are many ways to say what your goal is.
You can say you're clustering data points, or you can say you're partitioning the graph into different parts so that each part has more connections within it than across parts. In some sense, you can view it all as partitioning the graph into components that are more or less separated from each other. There's no way to decompose it into completely disjoint parts, but you can partition the graph into roughly disjoint parts. So this is the general type of setup, and I'm going to discuss probably one or two instantiations of it. The general theme is the following--and I feel this is a pretty deep observation in math: the eigendecomposition of this graph G relates a lot to graph partitioning. Again: eigendecomposition of this adjacency matrix G--here, by G, I mean an adjacency matrix--relates very well to the graph partitioning problem. You'll see that in all the examples I'm going to give, the main approach is to do some eigendecomposition. Sometimes it's not the eigendecomposition of G itself but of some transformation of G. The key point is that eigendecomposition relates very closely to partitioning and clustering. And that's not obvious: eigendecomposition is a very linear algebra thing, and graph partitioning is a very combinatorial thing. This is why it's useful: when you deal with combinatorial stuff--I'm not really a combinatorics person, but my way of thinking about it is that for many combinatorial problems, once you can relate them to algebraic or linear algebraic objects, or to polynomials, you get access to a different type of tool, and you can sometimes do a lot more than you expected.
So this is the general theme, and we're going to see probably two or three examples of why this is the case. Now I'm going to do something more concrete. This is called the stochastic block model--a very concrete setup where you can do the math and instantiate clearly what I mean. So the stochastic block model--I'll abbreviate it SBM--assumes G is generated randomly from two hidden communities or groups (sometimes it can be more, but I'm doing only two). The setting is: you have n vertices, or n users, and you assume there are two hidden groups, S and S bar, forming a partition, meaning S and S bar are disjoint and cover everything. Then you assume that if two users are from the same hidden community, they are more likely to be connected by an edge: if i and j are both from S, or both from S bar, then Gij is 1 with probability p and 0 with probability 1 minus p. Otherwise--if i and j are from different communities--Gij is 1 with probability q and 0 with probability 1 minus q. And, importantly, p is larger than q--maybe much larger, for the moment. How much larger? We'll quantify that in a moment; for now, you need p to be larger than q, so I'll just write larger, not much larger. So, basically, within the same hidden group you have a higher chance of being connected by an edge than across groups. If you draw this--I don't know how to draw a random graph, but you can think of there being an S and an S bar: if p is close to 1, within each group you have a high probability of connecting to each other, and across the groups you have some sparse edges. OK.
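This generative model is easy to write down in code. Here is a minimal sampler--the constants n, p, q are my own choices, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, q = 200, 0.5, 0.05
S = np.arange(n) < n // 2                 # hidden membership: first half in S
same = S[:, None] == S[None, :]           # indicator: same community?
prob = np.where(same, p, q)               # edge probability for each pair

# sample a symmetric 0/1 adjacency matrix with no self-loops:
# draw the upper triangle, then mirror it
U = rng.random((n, n))
G = (np.triu(U, 1) < np.triu(prob, 1)).astype(float)
G = G + G.T

# empirical edge densities match p within blocks and q across blocks
print(G[np.ix_(S, S)][np.triu_indices(n // 2, 1)].mean(),
      G[np.ix_(S, ~S)].mean())
```

Of course, the algorithm only gets to see G; the membership vector S is exactly what it has to recover.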
Now the goal is to recover S and S bar from the graph G-- if you recover S, you recover S bar. So this is a well-defined data generation model, and basically you want to discover the hidden groups; you want to do the clustering. And our approach is going to be eigendecomposition. But maybe before talking about eigendecomposition-- for some extreme cases you don't need eigendecomposition at all, right? Let's do a somewhat trivial warmup. Suppose p is 0.5 and q is 0. Then you almost don't have to do anything, because you're going to see two disconnected parts. If p is 0.5 and q is 0, you basically have S and S bar with some edges inside each-- not complete connections, but some edges-- and there are clearly two subgraphs. So, for example, you can say: I start from this vertex, I look at all my neighbors, and I put them all in S-- because if you see an edge, you know the endpoints are from the same group; if they were not from the same group, you would have zero chance of seeing an edge, right? So, basically, you find all the points you can reach from this single point, and then you declare that set to be S, and you do the same thing for the other side. Does that make sense? I saw some confusion. Basically, the algorithm is the following: I start with a node and see which nodes it can reach, and I put them into my set. I do this repeatedly to see what other nodes I can reach, and at some point I reach a closure-- I cannot reach any new nodes. Then I declare this set to be S, and the rest I declare to be S bar. That would work reasonably well for p is 0.5 and q is 0.
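The reachability warmup can be written as a simple breadth-first search. This is a sketch of the combinatorial algorithm just described, valid only in the q = 0 regime; the two-triangle example graph at the bottom is an illustrative choice, not from the lecture.

```python
from collections import deque
import numpy as np

def recover_by_reachability(G):
    """Recover one community as everything reachable from vertex 0.

    Only valid in the extreme regime q = 0 (no cross-community edges),
    where each community is a connected component with high probability.
    """
    n = G.shape[0]
    seen = {0}
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j in np.flatnonzero(G[i]):
            if j not in seen:
                seen.add(int(j))
                queue.append(int(j))
    S = sorted(seen)
    S_bar = [v for v in range(n) if v not in seen]
    return S, S_bar

# Two triangles with no edges between them: the q = 0 picture.
G = np.zeros((6, 6), dtype=int)
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    G[a, b] = G[b, a] = 1
S, S_bar = recover_by_reachability(G)
```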
And that's because, first of all, you don't have any false positives-- all the nodes you discover must belong to the same group-- and secondly, you can also try to show that you find all the nodes, because if somebody is in your group, they should be connected to you by some path. This is related to the so-called small-world phenomenon: if another user is from the same group, they should be connected to you by some path. But you can see that even convincing you this algorithm works for p is 0.5 and q is 0 is not that trivial-- you have to do something, and this is a combinatorial algorithm. What we're going to do instead is a more linear-algebraic type of algorithm, and you'll see everything becomes even clearer; it's a more powerful algorithm, and you don't need this combinatorial reasoning. So what do we do? We basically just do eigendecomposition. As a warmup, we're going to do the eigendecomposition of G bar, which is the expectation of G. So what is G bar? G bar is the expectation of G: each entry is just the expectation of Gij. Now, clearly, in reality you don't have access to this G bar, but just for starters, let's look at this expected version. And what is this G bar ij? It's equal to p if i and j are from the same community, and equal to q otherwise. So, basically, if you look at G bar-- suppose these are the indices for S and these are the indices for S bar-- when both i and j are from S, you get p; so you get a block of p's like this, and here you get a block of q's. So this is G bar.
And my claim is the following, supposing you have access to G bar. One: the top eigenvector of G bar is the all-ones vector. And two-- this is the interesting one-- the second eigenvector of G bar is the vector with 1's on the coordinates in S and minus 1's on the coordinates in S bar. So, basically, if you've got the second eigenvector of G bar, you've solved the problem, because you can just read off the community membership from this eigenvector. That's the claim. OK, so it sounds a little interesting, right? What's the intuition? I guess the intuition comes from the proof, so let's first do number one. Number one is true in much more generality-- it doesn't even need such a special G bar. What you do is compute G bar times the all-ones vector. Basically, you multiply G bar with the all-ones vector, and you are just looking at the row sums-- the sum of the entries in each row. So what's the sum of each row? The sum of the first row is p times n over 2 plus q times n over 2, because there are n over 2 entries with value p and n over 2 entries with value q. And every row has the same sum. So this simplifies to p plus q over 2, times n, times the all-ones vector. So you can see that the all-ones vector is the top eigenvector. Actually, this holds for general weighted graphs: for any matrix with fixed row sums, or any graph with so-called uniform degree. The degree of a vertex is literally the row sum of the adjacency matrix-- how many edges connect to that vertex. So if the degrees of all the vertices are the same, the row sums of the adjacency matrix are constant, and that means the all-ones vector is the top eigenvector. So this is an interesting fact.
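Both eigenvector claims can be checked numerically on a tiny G bar. This is a sanity-check sketch; n = 8, p = 0.7, q = 0.1 are arbitrary illustrative values, and the diagonal of G bar is kept at p for simplicity (the lecture's picture does the same).

```python
import numpy as np

n, p, q = 8, 0.7, 0.1
half = n // 2
u = np.array([1.0] * half + [-1.0] * half)   # candidate second eigenvector
same = np.equal.outer(u, u) > 0              # True iff same community
G_bar = np.where(same, p, q)                 # expected adjacency matrix
ones = np.ones(n)

row_sums = G_bar @ ones    # each row sums to p*(n/2) + q*(n/2)
second = G_bar @ u         # membership vector picks up eigenvalue (p-q)*n/2
```

The two matrix-vector products reproduce the eigenvalues computed on the board: (p+q)/2 times n for the all-ones vector, and (p-q)/2 times n for u.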
So, basically, the top eigenvector doesn't really tell you much. You have to go to the second eigenvector to see the interesting thing. So now let's look at the second eigenvector. Let's call this vector u. There are many ways to verify u is an eigenvector-- you can directly multiply and see what the eigenvalue is-- but probably the most intuitive way to think about it is the following: you subtract from G bar a background matrix. [INAUDIBLE] Where? Which one are you talking about? Sorry. The expression [INAUDIBLE]. Oh, sure, sure. But the cardinality of S is n over 2. Oh, I guess, sorry, I didn't assume that-- my bad. I should assume that this is an equal partition: assume also that the size of S is n over 2 and the size of S bar is n over 2. If they are not equally sized, you have to do a little extra work to deal with it-- not super important. If S and S bar are not exactly the same size, the all-ones vector is not an eigenvector anymore, so you have to re-weight-- you have to massage this matrix a little bit to make it still true. We'll get to that in the next section, I guess. So far, let's assume S and S bar are balanced. Now, how do we see that the second eigenvector is this vector u we're looking for? The way to think about it is that you subtract from G bar this background matrix: q times 1, 1 transposed. Basically, you subtract q from every entry of the matrix-- 1, 1 transposed, times q, is just the matrix with all entries equal to q. And then what's left is this matrix: let's say r equals p minus q, and you get r in the diagonal blocks-- this is the S-by-S block and the S bar-by-S bar block-- and 0 in the off-diagonal blocks. OK? So now you can see this matrix becomes nice, because it's a block diagonal matrix.
So for this matrix-- let's call it G prime-- we can verify that G prime times u is equal to r times n over 2, times u. How do you verify this? Just do the two blocks separately. So you compute r times the all-ones block against the (1, 1, minus 1, minus 1) pattern of u: you get r times n over 2 for the first set of coordinates, and minus r times n over 2 for the second set of coordinates. So this is r times n over 2, times u itself. Also, u is orthogonal to the all-ones vector, just because half of its entries are positive and half are negative, so the inner product is 0. That's why if you look at G bar times u, this is equal to G prime times u-- the background we subtracted off is orthogonal to u-- which is r times n over 2 times u, which is p minus q over 2, times n, times u. OK. So that's why u has eigenvalue p minus q over 2, times n. So I think the main point is that after you subtract off this background, G prime is block diagonal, and that means the eigenvectors align with the blocks. This is the fundamental thing we're looking for. Maybe, to generalize this and make it a bit more convincing: suppose you have a matrix A which looks like this-- all 1's in this block, all 1's in that block, all 1's in a third block-- so suppose you have three blocks now instead of two. Because the matrix is block diagonal, every block can do its own thing, right? So if you look at the eigenvectors, you can see that each of these three block-indicator vectors is an eigenvector, because you can treat each block separately.
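The background-subtraction step can also be verified directly: subtracting q from every entry of G bar leaves a block-diagonal matrix with r = p - q in the diagonal blocks, and u is unaffected by the subtraction because it is orthogonal to the all-ones vector. A small sketch (n = 6, p = 0.8, q = 0.2 are made-up numbers):

```python
import numpy as np

n, p, q = 6, 0.8, 0.2
half = n // 2
u = np.array([1.0] * half + [-1.0] * half)
G_bar = np.where(np.equal.outer(u, u) > 0, p, q)
r = p - q

G_prime = G_bar - q * np.ones((n, n))   # subtract the rank-one background q * 11^T
off_block = G_prime[:half, half:]       # should be identically 0
on_block = G_prime[:half, :half]        # should be identically r
```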
So, basically, you can say: I'm going to choose the all-ones vector on the first block and 0 everywhere else-- that's still an eigenvector. And when you have these three eigenvectors, if you look at every row across them-- this one is 0, this is 0, this is 1, right?-- each row gives you the cluster ID of that vertex. So I think this is the fundamental intuition for why eigenvectors are useful for capturing the clustering structure of a graph. In the extreme case, when you have extreme clustering-- every one of these three blocks, or three subsets, has strong internal connections and no cross-group connections-- the eigenvectors align exactly with the block structure. What makes things a little more complex is when you have some background-- some small random entries in the other places. That elevates the entire matrix a little bit, right? But it doesn't change the eigenspace fundamentally. That's pretty much the intuition. Any questions? So it seems like here you have the [INAUDIBLE] structurally. And then, as you have some permutation, [INAUDIBLE]---- Right. --you would have to [INAUDIBLE]. Right. Right. So how would you permute this? That's a great question. Actually, that's exactly why this is working. So the question is, what if you permute this, right? If you permute it, the eigenvector will permute accordingly. So suppose-- I'm not sure whether this makes sense-- suppose you declare this part and this part to be the first block, and this part and this part to be the second block.
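The three-block picture on the board can be made concrete: each block indicator is an eigenvector whose eigenvalue is the block size, and stacking the indicators as columns, each row is a one-hot vector naming that vertex's cluster. Block sizes 2, 3, 2 below are arbitrary illustrative choices.

```python
import numpy as np

sizes = [2, 3, 2]
N = sum(sizes)
A = np.zeros((N, N))
indicators = []
start = 0
for s in sizes:
    A[start:start + s, start:start + s] = 1.0   # all-ones block on the diagonal
    e = np.zeros(N)
    e[start:start + s] = 1.0                    # indicator vector of this block
    indicators.append(e)
    start += s

V = np.stack(indicators, axis=1)   # columns are the block-indicator eigenvectors
cluster_ids = np.argmax(V, axis=1) # each row of V is one-hot: it names the cluster
```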
I think the coordinates of the eigenvectors will permute accordingly. And that's why the alignment is maintained and you can discover the hidden structure. OK. Sounds good. Maybe another thing-- I'm not sure whether this is a confusion for you, but it could be. Here, the eigenvectors have no negative values in this construction, right? The reason I didn't have negative values is that it makes things simpler. But, for example, this vector-- the sum of two of the block indicators-- is also an eigenvector, because it's the sum of two eigenvectors, and all of these have the same eigenvalue. Any linear combination of eigenvectors with the same eigenvalue is still an eigenvector; that's how you get negative values. And there is something special about the all-ones vector. In this A example there's nothing special about it, but when you add background noise, the all-ones direction stands out. So here you have three eigenvectors with equal eigenvalues, and the all-ones vector lies in the subspace spanned by these three eigenvectors-- it's a linear combination of them. When you add the background noise, the all-ones direction stands out, and you're left with two other directions, which are still the same, and those two directions tell you the block structure. Maybe another way to think about this: suppose you have two blocks. If you don't have any background noise, then the eigenvectors can be taken to be (1, 1, 1, 0, 0, 0) and (0, 0, 0, 1, 1, 1). These are our two eigenvectors-- but you can represent this eigensystem in different ways.
You could also write it with the all-ones vector and the plus-minus vector, just because you have different ways to represent a two-dimensional eigenspace. But when you add the background noise, the all-ones direction stands out. That's why you can see the structure in this coordinate system but not in that one. So, basically, without the background noise, you have the direction (1, 1, 1, 0, 0, 0) and the direction (0, 0, 0, 1, 1, 1); equivalently, you can take the all-ones direction and the direction (1, 1, 1, minus 1, minus 1, minus 1). You have these two different coordinate systems for the same subspace. When you add background noise, you increase the strength in the all-ones direction, but the subspace doesn't really change: the all-ones direction becomes the top eigenvector, and the plus-minus direction becomes the second eigenvector. Fundamentally, nothing else changed. I hope this clarifies rather than confuses. OK. I'm running out of time, so let me take two minutes to wrap up and give a quick overview of what we do next. If you really want, you can verify that G bar is actually equal to p plus q over 2, times 1, 1 transposed, plus p minus q over 2, times u, u transposed. This is the eigendecomposition of this matrix G bar. So next-- in reality, we only have access to G, not G bar. So what do we do? The intuition is that G is approximately equal to the expectation of G in certain respects. It's not true that every entry of G is close to the corresponding entry of the expectation of G-- each entry of G is binary, and the corresponding entry of the expectation is p or q, so there's no way they are close entrywise. But they are close in terms of the spectrum.
So, essentially, we want to show-- even though we need a little trick to make this work nicely-- that the operator norm of the difference between these two is small. Then decomposing G is similar to decomposing the expectation of G. That's pretty much it. And now you can see that the concentration inequalities we discussed in the earlier lectures of this course become useful. So concretely, what you do is the following. You write G as G minus expectation of G, plus expectation of G; the expectation of G is just G bar. So this is G minus expectation of G, plus p plus q over 2 times 1, 1 transposed, plus p minus q over 2 times u, u transposed, right? Now, the 1, 1 transposed part doesn't matter too much: the top eigenvector is something you already know, and u is what you want to discover, so you shouldn't bother with the top eigenvector-- you should directly look for the second one. To make it cleaner, you move the known part to the left-hand side: you look at the matrix G minus p plus q over 2 times 1, 1 transposed. This is something you know, because G is something you know. And this matrix is equal to G minus expectation of G, plus p minus q over 2 times u, u transposed. So you can view G minus expectation of G as a perturbation, and p minus q over 2 times u, u transposed is the signal you're really looking for. So, basically, you start from the left-hand side and take an eigendecomposition: you do the eigendecomposition of G minus p plus q over 2 times 1, 1 transposed, and the hope is that the top eigenvector of this matrix is close to u. That's our goal. And how do you make sure this is true? It suffices to show that G minus the expectation of G is much, much smaller in spectral norm than the signal-- the noise is much smaller than the signal in operator norm.
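The whole pipeline just outlined can be sketched end to end: sample G, subtract the known (p+q)/2 times 11-transposed part, take the leading eigenvector, and read off communities from its signs. The parameters n = 400, p = 0.7, q = 0.1 are arbitrary values where the signal (p-q)n/2 dominates the noise; this is an illustration of the idea, not the course's reference code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 400, 0.7, 0.1
half = n // 2
truth = np.array([1] * half + [-1] * half)
probs = np.where(np.equal.outer(truth, truth) > 0, p, q)
upper = np.triu(rng.random((n, n)) < probs, k=1)
G = (upper | upper.T).astype(float)                 # symmetric SBM sample

# Subtract the known top direction; the leading eigenvector of what
# remains should align with the membership vector u.
M = G - (p + q) / 2 * np.ones((n, n))
vals, vecs = np.linalg.eigh(M)
v = vecs[:, np.argmax(np.abs(vals))]                # eigenvector of largest |eigenvalue|
pred = np.where(v >= 0, 1, -1)
accuracy = max((pred == truth).mean(), (pred == -truth).mean())  # sign ambiguity
```

The max over the two sign assignments reflects that an eigenvector is only defined up to sign, so the algorithm recovers the partition, not which side is called S.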
And so, in some sense, you need some robustness of the eigendecomposition. I didn't really discuss the relevant existing theorems, but essentially, if you have this, you can prove that the eigenvectors of the sum of these two matrices are very similar to the eigenvectors of one of the matrices. This is called the Davis-Kahan theorem. I won't have time to talk about all of it, but intuitively it makes sense: if the noise is small enough in the spectral sense-- the operator norm-- then you recover the signal. And how do you show the noise is small? I'm going to discuss that at the beginning of the next lecture; essentially you just have to prove a concentration inequality using some of the tools from lecture 3 or 4 of this course. OK, any questions? What about multiple clusters, where the noise was [INAUDIBLE] decomposition [INAUDIBLE]? So if you have more clusters, the noise affects the entire spectrum, and it becomes a little more complex. First of all, if you have no noise, you can still prove that the eigenvectors are enough for you to recover the blocks. But the robustness part gets trickier, because now you have more eigenvectors, the noise influences each of them, and you have to control some noise-to-signal ratio using slightly more advanced techniques. But essentially, it's just the mathematical part that's a bit more complicated; fundamentally it's doing the same thing. [INAUDIBLE] question. Is it sufficient to-- my impression is [INAUDIBLE] of this noise, this eigenvector, you must always do [INAUDIBLE]. The new second eigenvector-- we want to show that it's close to u, which is what the eigenvector of the new matrix is supposed to be. When we analyze the operator norm of G minus the expectation of G, it feels like we're trying to bound how much it affects all of the eigenvectors. Right. Right. Do we really need to do that?
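The Davis-Kahan flavor of robustness can be seen numerically: perturb a rank-one signal matrix with small symmetric noise and check that the top eigenvector barely moves. The signal strength 10.0 and noise scale 0.02 are made-up numbers chosen so the eigengap dominates the noise operator norm; this illustrates the phenomenon, not the theorem's exact constants.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
u = np.ones(n) / np.sqrt(n)
signal = 10.0 * np.outer(u, u)             # rank-one signal: top eigenvalue 10, rest 0
E = rng.normal(0.0, 0.02, (n, n))
E = (E + E.T) / 2                          # make the noise symmetric

v = np.linalg.eigh(signal + E)[1][:, -1]   # top eigenvector of the perturbed matrix
alignment = abs(v @ u)                     # cosine of the angle to the true direction
noise_norm = np.linalg.norm(E, 2)          # operator (spectral) norm of the noise
```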
Or is there a way we can go around and just sort of argue about how the [INAUDIBLE]? Yeah, I think that's a great question. Just to rephrase: you are asking whether we really need to say that G minus the expectation of G is small in all directions, or whether it's enough to say that G minus the expectation of G doesn't mess up the direction u. I think you do have to say, to some extent, that G minus the expectation of G is small in all directions, because if it were very big in one direction-- even a direction completely orthogonal to u-- then that direction would become the new top eigenvector, right, because we're talking about the max. Exactly how you measure this-- there is still some room to negotiate. But you do have to, in some sense, say something about all directions of the noise. OK. OK. Thanks. I guess, see you Monday-- or Wednesday.
Stanford CS229M: Machine Learning Theory, Fall 2021. Lecture 9: Covering number approach, Dudley's theorem. OK, I guess let's get started. This is working, right? Yeah. So last time, where we ended up was: you view the function class F, in some sense, as equivalent to a set Q. So if you have a function class F, you can define this Q to be the set of vectors of this form-- basically the output vector, which is a vector in R^n, where f ranges over the class F. So from the Rademacher complexity perspective, these two objects are not very different: the empirical Rademacher complexity of F only depends on Q. We also talked about the case where you have a finite Q, a finite F. Sometimes, actually, even when you have an infinite F, you can have a finite Q, but that's not very typical. In this case, what you can show is a Rademacher complexity bound-- this is the so-called Massart lemma, from the end of the last lecture. It says: suppose every vector in Q satisfies that its norm, normalized by 1 over square root of n, is at most M. Then this quantity, which is essentially the Rademacher complexity of F, is bounded by the square root of 2 M squared, times log of the size of Q, over n. If you translate this back to the function class: if F satisfies that for every f in F the quadratic mean of f over the data points is at most M, then the Rademacher complexity of this function class F is bounded by the square root of 2 M squared, times log of the size of F, over n. So last time we didn't deal with the case where you don't have a finite hypothesis class, right? If you have an infinite hypothesis class-- infinite Q or F-- then what do you do?
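On a tiny instance, the Massart bound can be checked exactly by enumerating all sign vectors. This is a brute-force sanity check with made-up sizes (n = 8 data points, k = 5 vectors) chosen so that all 2^n sign patterns fit in a loop.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 5
Q = rng.uniform(-1, 1, (k, n))                # finite set of output vectors in R^n
M = np.sqrt((Q ** 2).sum(axis=1) / n).max()   # max of ||v||_2 / sqrt(n) over Q

# Exact empirical Rademacher complexity: average over all sign vectors sigma
# of sup_{v in Q} <sigma, v> / n.
total = 0.0
for signs in itertools.product([-1, 1], repeat=n):
    sigma = np.array(signs)
    total += (Q @ sigma).max() / n
rad = total / 2 ** n

massart_bound = np.sqrt(2 * M ** 2 * np.log(k) / n)
```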
What we're going to do is a discretization, but now we discretize in the Q space-- the output space of F. Before, in one of the previous lectures, we discretized in the parameter space; now we're going to discretize in this more fundamental space, the output space. Because, as we argued, the output space is what's really fundamentally important: the parameterization is just something that induces the output space, and if you have the same output space with different parameterizations, the function classes are actually not different. So the parameterization is not the most fundamental thing here. We're going to discretize the output space, and we still have this concept of an epsilon cover: we're going to cover the output space Q-- the output space of F-- by a so-called epsilon cover. Recall the definition: C is an epsilon cover of Q-- I'm talking about an epsilon cover of Q now, but it's the same definition as before with the variable changed-- with respect to some metric rho if, for any vector in Q, there exists a vector in C that covers it, meaning the distance between the two vectors is less than epsilon. Let me also define the so-called covering number, a quantity we're going to use very frequently. The covering number takes several arguments: the target radius epsilon, the set Q, and the metric rho. It is defined to be the minimum size of an epsilon cover of Q with respect to rho-- the minimum possible size of the covering. In some sense, you can use this covering number in two ways: one way, you talk about the covering of Q; the other way, you talk about the covering of F, right?
Even though I think the fundamental object is Q, in the literature-- if you read papers-- people mostly talk about covering the function class F, so we're going to use that language, but they are essentially the same. Let's first clarify: if you do this for a covering of F, it's the same thing. If you have an epsilon cover of the function class F, it satisfies that for every f in capital F, there exists an f prime in the cover such that rho of f and f prime is less than epsilon-- literally the same thing. Also, we're going to choose rho to be consistent between Q and F. In the Q perspective, for two vectors v and v prime-- both vectors in R^n-- we choose rho of v and v prime to be 1 over square root of n, times the L2 distance between them. So this is a normalized version of the L2 distance. The reason we normalize by 1 over square root of n is just for consistency; the normalization fundamentally doesn't matter-- whatever normalization you choose, it doesn't change the essence. The normalization here is chosen for consistency with the function space view, where, given two functions f and f prime, we define their distance as follows. Recall that we only look at the functions on the finite set of points z1 up to zn. The natural definition of the distance is the L2 distance on this set of points: you look at the differences between the two functions on the zi's and take the quadratic average-- the quadratic average of the difference between f and f prime over the zi's.
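A concrete way to build an epsilon cover of a finite point cloud under this normalized L2 metric is the standard greedy construction. This is an illustrative sketch (the 50 points in R^10 and the radius 0.5 are arbitrary choices); greedy gives an upper bound on the covering number, not necessarily the minimum that defines N(epsilon, Q, rho).

```python
import numpy as np

def rho(v, w):
    """Normalized L2 distance: (1 / sqrt(n)) * ||v - w||_2."""
    return np.linalg.norm(v - w) / np.sqrt(len(v))

def greedy_cover(Q, eps):
    """Greedy epsilon-cover of a finite set of vectors under rho.

    Returns a subset C of Q such that every v in Q is within eps of
    some element of C.
    """
    C = []
    for v in Q:
        if all(rho(v, c) > eps for c in C):
            C.append(v)
    return C

rng = np.random.default_rng(0)
Q = rng.uniform(-1, 1, (50, 10))
C = greedy_cover(Q, eps=0.5)
```

Every point skipped by the greedy pass was within eps of an already-chosen center, and every chosen point covers itself, so the output is a valid epsilon cover.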
And you can see these are exactly the same rho-- you can view it either in the function space or in the vector space. Typically, people write this rho as rho sub L2 of Pn. For those of you not familiar with this notation, just think of it as an arbitrary symbol. For those of you a bit more familiar with these function-space notions: Pn is the empirical distribution-- uniform over z1 up to zn-- and L2 of Pn means the L2 metric defined with respect to this empirical distribution, this uniform distribution on the samples. But if you don't know where this comes from, just treat it as an abstract symbol; I'm going to use it several times for formality, but it really just means this normalized L2 distance. OK, so with this view, as we said, F corresponds to Q, and a function f corresponds to the vector (f of z1, up to, f of zn) in Q-- a one-to-one correspondence. Also, the rho's correspond to each other, so you can write this trivial correspondence: the covering number in the function space view with the metric rho is the same as the covering number in the output vector space with the normalized L2 norm. One more reason why we normalize by something that depends on n: you have n dimensions, and n is changing, so in some sense it makes sense to normalize by that. If you have vectors of changing dimension, it's sometimes hard to compare different cases, so you want a norm that doesn't depend on the dimensionality. And from now on, we're going to write things in the function space notation.
We are going to write in the F notation, but in my mind, I'm always thinking about the output space, because that's just a vector space, which is much easier to think about. The formal theorem will be stated in the function space, but in the proof I'm going to switch to Q to make things more explicit. The first theorem is, in some sense, a trivial discretization: we're going to discuss this first and then move to a more advanced discretization, called chaining. The trivial version is the following-- basically similar to what we did in Lecture 3, but now in the function space. Let F be a family of functions from some space Z to the interval from minus 1 to 1, so we assume the functions are bounded between minus 1 and 1. Then for every epsilon larger than zero, you can show the following: the Rademacher complexity is at most epsilon, plus the square root of 2 times the log of the covering number at radius epsilon, over n. We're going to prove this, and in the proof you will see that the epsilon term is, in some sense, the discretization error, and the other term comes from the Rademacher complexity of the finite epsilon cover-- we'll see this more clearly in the proof. So the general idea of the proof is that you approximate F by an epsilon cover. And once you have the epsilon cover, which is finite, you have a Rademacher complexity bound for it, and then you pay something extra for the discretization, or the approximation. OK.
When we prove it, as I said, I'll switch to the vector space view, just so we don't need the function-space jargon. Let C be an epsilon cover of Q-- Q is the output space-- with size equal to the minimum covering number, which, as we claimed, is the same as the covering number of the function class. Now, look at the Rademacher complexity of the function class, which, as we claimed, is the same as the complexity of the output set. What you do is approximate v by a nearby point in the cover. So suppose you have the set Q and you have an epsilon cover like this: every center is a point in C, and the balls cover your set. Take a vector v; you know v is covered by some point v prime in C-- recall that every point of C covers its neighbors within some radius, and every point of Q is covered by some vector in C. Say v is covered by v prime. Then you know the distance between v and v prime is less than epsilon, and you can approximate: for every v there is a v prime in C with distance less than epsilon, and you can write the inner product of v with sigma, trivially, as the inner product of v prime with sigma, plus the inner product of v minus v prime with sigma. Let's call v minus v prime the vector z, so it's v prime dot sigma plus z dot sigma. And you know z is small: its norm-- recall we're using the scaled L2 norm-- is less than epsilon. Then for z dot sigma you can use Cauchy-Schwarz: the inner product of two vectors is at most the product of their 2-norms.
So this is less than square root of n times epsilon, times the norm of sigma, which is square root of n-- so the product is n times epsilon, and after the 1 over n factor in the Rademacher complexity, this error term contributes at most epsilon. Then we go back to the Rademacher complexity. First, the expectation is at most the expectation over the cover, plus epsilon-- because z dot sigma over n is at most epsilon, and epsilon is a constant, so it can come outside the expectation. And what's the range of v prime? v prime always has to be in C, right? That's the definition of v prime: it is the cover element in C. So that step is an equality, and for the remaining term you use the Massart lemma-- this is the complexity of the finite cover set C-- so you get the square root of 2 log of the size of C, over n, plus epsilon. And we are done: C has size equal to the covering number, so the bound is the square root of 2 log N of epsilon, F, L2 of Pn, over n, plus epsilon. OK? Pretty simple. Any questions so far? OK. So now let's talk about a stronger theorem. This is, in my opinion, a pretty deep theorem-- at least for me, I don't have much intuition about it a priori; hopefully after I show the proof it's intuitive, but it's something non-trivial. Generally, this type of technique is called chaining, and there are multiple ways to do this kind of chaining in different situations. The particular theorem here is called Dudley's theorem. The theorem says: let F be a family of functions from Z to R. So here I relaxed the boundedness assumption, because this theorem is more general-- it can work even for functions that are not bounded. Then the Rademacher complexity is bounded by the following. Let me write it down-- it doesn't look very intuitive in the beginning, but I will explain. It's an integral, and the variable is epsilon.
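The single-scale bound just proved, epsilon plus the square root of 2 log N(epsilon) over n, involves a trade-off in the choice of epsilon, which is easy to see numerically. The covering-number shape log N(eps) = d * log(3/eps) is a hypothetical choice (typical of a d-dimensional bounded parametric class), and d = 5, n = 10000 are made-up numbers.

```python
import numpy as np

d, n = 5, 10_000
eps_grid = np.linspace(1e-3, 1.0, 1000)
# Discretization bound at each scale: eps + sqrt(2 * log N(eps) / n)
bound = eps_grid + np.sqrt(2 * d * np.log(3 / eps_grid) / n)
best = bound.min()
best_eps = eps_grid[np.argmin(bound)]
```

Very small epsilon makes the covering term blow up and large epsilon makes the discretization error dominate; the minimum sits at an intermediate scale.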
You integrate over epsilon from zero to infinity, looking at the covering number for each epsilon: the integrand is the square root of the log covering number, divided by square root n. At first it's not even clear whether this is a stronger theorem than before, because it's not trivial to compare with the previous one — but you can compare them if you do some work. So what I'm going to do is show the proof first and then interpret the statement, because from the proof it's pretty obvious that you should expect a stronger theorem, even though just comparing the two forms isn't trivial — the proof technique is an extension of the previous proof technique. And then later I'm going to compare them and interpret the bound, because this form by itself is still somewhat hard to use, right? How do we know whether we can get something good out of this integral? So I'm going to give you several cases where you can get a good number out of this integration. That's the plan. All right. So now let's dive into the proof — how do we prove this, and what's the intuition? Let's start with the intuition. This is probably one of the more technical proofs in this course. I'm debating whether I should draw a single figure — I've drawn a lot of figures in my lecture notes, and I think it's going to be challenging for the scribe notetakers to reproduce all of them. Maybe I'll draw multiple figures and let the scribes figure out how to merge them if they want. So the intuition is — let me draw this again.
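Typeset, the bound the lecture writes on the board reads as follows (the integrand vanishes once the radius is large enough that a single point covers everything, so the integral is effectively over a finite range):

```latex
\mathrm{Rad}_S(\mathcal{F}) \;\le\; 12 \int_{0}^{\infty}
  \sqrt{\frac{\log N\big(\epsilon, \mathcal{F}, L_2(P_n)\big)}{n}}\; d\epsilon
```

The constant 12 is the one worked out at the end of the proof below.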
So you have this set Q, and what we did was create an epsilon cover: every center is a point in C, and you want all of these balls to cover your set. And what we did was: you have a vector v here, and you approximate v by v prime plus the difference z. So this is all fine. The tricky thing is: how do you deal with the error term, z times sigma? What we did before was a very brute-force inequality saying that this is at most the 2-norm of z times the 2-norm of sigma. But equality there can happen only if z is perfectly correlated with sigma, which cannot always happen, right? Because z is the difference between v and v prime. By the way, I drew the covering region like a ball, but the relevant region could have a different shape, right? If everything really were just a Euclidean ball, then everything would become too trivial for us. Q is a set and there is some metric defined on it — sorry, the metric is trivial, but the set itself could be complicated, because you don't really know what the set looks like: it's the image of a function class on a set of points, right? So the covering balls are balls, but the set itself could be weirdly shaped. That's why z may not always be correlated with sigma. In the worst case it can be, but not always. So basically, the question is: can we strengthen this inequality here? Why does this have to be worst case?
So if you think about it, what do you really care about? What you care about is the following. Let me write it down, a little bit slowly. You do this inequality: you first say that this is at most the expectation of the sup of the first term plus the expectation of the sup of the second term. This is because, as we claimed, the expectation of the sup of A plus B is always at most the expectation of the sup of A plus the expectation of the sup of B. All right, so that's the first step. And then you care about this second term. As I said, we have a very worst-case inequality for the inner product, but this term itself may not actually be worst case, right? Because here, z is, in some sense, in the ball around v prime. So we have this ball around v prime, and z is in it — and really z is in the ball intersected with Q. If the region really were a whole ball, the worst-case inequality would be tight. But you are intersecting the ball with Q, and Q could be weirdly shaped. So this term could still be small, because the ball intersected with Q may have small complexity, right? So basically, the idea is: for the first term, you just pay the log of the covering number. But for the second term, you do another round of discretization — because you don't want to say z can be worst case. I want to say z probably cannot be worst case; z has some structure, so I'm going to discretize it again.
So basically, the idea is that this error term is still a Rademacher complexity — of the ball around v prime intersected with Q — and you can do another round of discretization for that set to get an even tighter inequality. So you have nested layers of discretization that make the bound stronger and stronger. That's the basic idea. Now let's make it a little more formal, so that I can define things and explain more. Maybe just to briefly draw this, at another level: what you do is another discretization of this yellow ball, and then you say that z cannot be worst case — it has to be something like this: you approximate z by its nearest neighbor at the next level, then you look at the difference, and then you approximate that difference by something else, and so on. I will draw this more formally in a moment. To do that, let's define epsilon0 as the sup over f in F of the max over i of |f(z_i)| — the maximum possible output value. And you can see — this is just some almost-trivial preparation — that epsilon0 is always at least square root of 1 over n times the 2-norm of v, for every v in Q, because each entry of v is one of the f(z_i)'s, which is at most epsilon0. So basically, epsilon0 is an upper bound for the entire set. You never have to talk about any epsilon bigger than this, because everything is in the ball of radius epsilon0.
And now, I'm going to create this family of discretizations — technically it's not nested, but I've always thought about it as a nested family; the nestedness isn't actually needed for the proof. Let me define things first. Consider epsilon1 to be half of epsilon0, epsilon2 a quarter of epsilon0, and in general epsilon j equals 2 to the minus j times epsilon0. These are the radii for the epsilon covers. And let Cj be an epsilon-j cover of the set Q. So I have this family of epsilon covers. Intuitively, you can think of Cj plus 1 as nested inside Cj in some sense, but this is not necessary for the proof — I just like to think of it that way for intuition. So what's really happening, if I draw this, is that I have this set Q. Maybe I shouldn't draw it as a ball, so that it's more interesting. So this is the set Q, and the biggest radius, epsilon0, covers everything. If you use the epsilon0 cover, it's trivial: you just need one point — the origin — to cover everything, so let me not draw that. Let's draw something at level epsilon1 instead. What happens is that you have a very coarse-grained cover at the beginning, something like this. All right, so this is your epsilon1 cover. And I have a point — this is something really hard to draw, so I need to follow my notes exactly so that I don't have any issues. Suppose I have a point v here that I want to approximate by the cover, and suppose this is the origin. Let me just draw v here, and call its nearest cover point u1.
u1 is the closest point in the first level of the epsilon cover. Before, I would just use u1 to approximate v. Now, what I'm going to do is first use u1 and then consider the second level of the epsilon cover, whose radius is smaller — half the size. By the way, this number 2 is nothing magical; you could make it 3 or 4, it's just for convenience. You just need a constant factor smaller at every level. So you have the second level, and you take the point u2, the nearest neighbor of v in the second level. And then I approximate v by u1 plus the vector from u1 to u2, leaving a small distance between v and u2, right? And then the third level — suppose in the third level you have another point u3, and you also consider the vector from u2 to u3. So basically, you approximate v by the red vector plus the green vector plus the yellow vector, and you continue until you get to v. Any questions so far? So I'm going to approximate v by u1 plus (u2 minus u1) plus (u3 minus u2), and so on to infinity, because I'm going to have an infinite number of these covers. It doesn't have to be exactly infinite — if the approximation is good enough after finitely many levels, you can stop — but for simplicity, let's say we have an infinite sequence of epsilon covers. More formally, for every v in Q, define uj to be the nearest neighbor of v in Cj. By definition, v has to be covered by Cj, so the distance between v and uj is less than epsilon j, right?
In other words, 1 over square root n times the 2-norm of v minus uj is less than epsilon j. And because epsilon j goes to 0, we know that uj goes to v as j goes to infinity. That's why you can write v as this telescoping sum: u1 plus (u2 minus u1) plus (u3 minus u2), and so forth. And if you define u0 to be 0, then you can write this as (u1 minus u0) plus (u2 minus u1), and so on — this is just to make it look nicer, so we can write it as the sum over i from 1 to infinity of ui minus u(i-1). You can check the convergence if you really want: the partial sum up to m is um minus u0, and um goes to v as m goes to infinity, so the sum converges to v. And technically, if you really want a careful proof, you don't have to use an infinite sum — you can just choose an m that is big enough and carry a small error at the end. That's also fine; I'm using the infinite sum to make it simpler. OK. So once we do this, as we planned, we have better and better approximations. Now let's deal with each of these vectors. What we have is that the expectation of the sup becomes the expectation of the sup of the sum over i from 1 to infinity of (ui minus u(i-1)) times sigma. And then you swap the sum with the sup: the sup of a sum is at most the sum of sups, so you get the sum over i of the expectation of the sup. And here, the constraint is that ui has to be in Ci and u(i-1) has to be in C(i-1). So each of these quantities is like a Rademacher complexity, but of a finite class, because ui and u(i-1) are not arbitrary vectors — they have to come from finite sets.
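The telescoping construction can be sketched concretely on a finite point set. This is my own illustration, not the lecture's code: greedy_cover is a generic greedy epsilon-net builder (any valid cover would do), and the final assertion checks that the partial sum u1 + (u2 − u1) + ... lands within the last radius of v.

```python
import numpy as np

# Chaining sketch on a finite set Q, with the scaled metric
# d(u, v) = ||u - v||_2 / sqrt(n).
def greedy_cover(Q, eps):
    """Greedy eps-net: every point of Q ends up within eps of some center."""
    cover = []
    for v in Q:
        if all(np.linalg.norm(v - c) / np.sqrt(len(v)) > eps for c in cover):
            cover.append(v)
    return cover

rng = np.random.default_rng(1)
n, m = 50, 200
Q = list(rng.normal(size=(m, n)))
eps0 = max(np.linalg.norm(v) / np.sqrt(n) for v in Q)  # one eps0-ball covers Q

v = Q[-1]
u_prev = np.zeros(n)       # u_0 = 0
partial = np.zeros(n)      # telescoping partial sum u_1 + (u_2 - u_1) + ...
levels = 7
for j in range(1, levels + 1):
    eps_j = eps0 * 2.0 ** (-j)
    Cj = greedy_cover(Q, eps_j)
    u_j = min(Cj, key=lambda c: np.linalg.norm(v - c))  # nearest neighbor in Cj
    partial += u_j - u_prev   # the sum telescopes to u_j
    u_prev = u_j
# After `levels` levels the partial sum equals u_levels, which is
# eps_levels-close to v in the scaled metric.
assert np.linalg.norm(partial - v) / np.sqrt(n) <= eps0 * 2.0 ** (-levels) + 1e-9
```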
And then we just have to see what the Rademacher complexity of this finite set is and continue with the derivation. OK, so let's deal with each of these terms. We're going to use the Massart lemma, which handles exactly these kinds of terms for finite sets. First of all, the pair (ui, u(i-1)) is the variable, and it lives in Ci times C(i-1), whose size equals the size of Ci times the size of C(i-1). That's something you can compute; we'll simplify it in a moment. By the way, for the Massart lemma — let's go back real quick; I think we had this in the beginning — you have to check how large the vectors are, right? This M matters: if all the vectors are super big, your complexity will be big, and if all the vectors are extremely small, your complexity will be small. So let's check the value of M here. M is the bound on the normalized 2-norm of the vectors: we need to check how large 1 over square root n times the 2-norm of ui minus u(i-1) can be. If you upper bound this — you just do a trivial triangle inequality and — wait, sorry, my bad. You cannot do the trivial triangle inequality; that would defeat the purpose. You have to do a slightly more careful triangle inequality, because you want to say that ui and u(i-1) are close, right? Each of ui and u(i-1) by itself could be big — as vectors they are probably big — but their difference is small, and smaller and smaller as i grows. And how do you control that? There's actually an easy way.
You write ui minus u(i-1) as (ui minus v) plus (v minus u(i-1)), because you can always compare with v — that's something you know. Then you use the triangle inequality, because both ui minus v and u(i-1) minus v are somewhat small. How small? You know that 1 over square root n times the norm of ui minus v is less than epsilon i, and the second term is less than epsilon (i-1) — just by the definition of the epsilon covers. And epsilon i is 2 to the minus i times epsilon0, so epsilon (i-1) is 2 times epsilon i. Adding the two, the bound is actually 3 times epsilon i, just because epsilon (i-1) is twice as big as epsilon i, OK? So with all this preparation we can apply the Massart lemma. What you get is that the sup term is at most the square root of 2 times M squared — where M is 3 epsilon i, so M squared is (3 epsilon i) squared — times the log of the size of the set, which is the size of Ci times C(i-1), all over n. Let's simplify that a little. You pull 3 epsilon i outside over square root n, and you're left with the square root of 2 times (log |Ci| plus log |C(i-1)|). And then |C(i-1)| is at most |Ci|, because Ci is a more fine-grained discretization than C(i-1), and a more fine-grained cover should have more points. So you bound log |C(i-1)| by log |Ci|, and you get 6 epsilon i over square root n times square root of log |Ci|. The constant doesn't really matter that much anyway. All right. So now let's see what we've achieved: we have bounded each of these terms, so let's go back to the formula and plug it in.
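Since the argument leans on Massart's lemma at every level, here is a rough Monte-Carlo sanity check of the lemma on a random finite set — my own numerical aside, not part of the lecture.

```python
import numpy as np

# Massart's lemma for a finite set A of k vectors in R^n:
#   E_sigma sup_{u in A} (1/n) <u, sigma>  <=  M * sqrt(2 log k / n),
# where M = max_u ||u||_2 / sqrt(n) is the normalized norm bound.
rng = np.random.default_rng(2)
n, k = 200, 30
A = rng.normal(size=(k, n))
M = np.max(np.linalg.norm(A, axis=1)) / np.sqrt(n)
bound = M * np.sqrt(2 * np.log(k) / n)

# Monte-Carlo estimate of the left-hand side over fresh Rademacher draws.
sups = [np.max(A @ rng.choice([-1.0, 1.0], size=n)) / n for _ in range(2000)]
assert np.mean(sups) <= bound
```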
So what we got is: the expectation of the sup of 1 over n times v dot sigma — our target — is at most the sum over i from 1 to infinity of 6 epsilon i over square root n times square root log |Ci|. This is still not an integral, right? But it has the flavor of an integral, in the sense that we have a lot of terms. So how do we see this? Maybe let me just write down the final formula we want to achieve: 12 times 1 over square root n times the integral of square root of log N(epsilon, F, L2(Pn)) d epsilon. That's the final formula. By the way, in some sense you don't really have to derive the integral form if you just care about applying the bound to particular cases, because the sum is already enough to apply. It's just that the integral looks nicer — it's a good interface in a mathematical sense. So how do we see that these two are almost the same? The way I see it is the following. Think about what this integration is: epsilon is on the horizontal axis, and we plot the square root of the log covering number, square root of log N(epsilon, F, L2(Pn)). At some point the covering number becomes 1, and so the log covering number becomes 0 — this is just because when the radius is big enough, you can use a single point to cover everything. In particular, in our notation, when the radius reaches epsilon0, the covering number becomes 1, the log covering number becomes 0, and the square root of that is also 0.
And this covering number goes to infinity as epsilon goes to 0, because you need more and more points in the cover as it gets finer. And you have this sequence of points: epsilon1 is here, which is half of epsilon0. But let's look at epsilon i — let me draw this exactly as in my notes. If this is epsilon i, then half of it is epsilon (i+1), by definition. And what's the value of the curve there? It's the corresponding entropy, square root of log |Ci| — that's our notation. Now let's compare the two quantities we're trying to link. The integral below is just the area under this curve; that's the definition. (I'm ignoring the 1 over square root n and the 12, which are easy.) And what is the finite sum? Look at the area of this triangle — sorry, this is not a triangle, it's a rectangle, my bad. The area of this rectangle is the width, epsilon i minus epsilon (i+1), times the height, square root of log |Ci|. And the width epsilon i minus epsilon (i+1) is just epsilon i over 2, so the area is epsilon i over 2 times square root of log |Ci| — which is a constant multiple of the term in our sum, right? So basically, the finite sum is adding up all of these rectangles, while the integral takes everything under the curve. That's why the sum of the rectangles is smaller than the integral, up to a constant factor. So what you know is that epsilon i over 2 times square root log |Ci| — the area of this rectangle — is less than the integral over this part, right?
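The rectangles-under-the-curve comparison can be checked numerically for any decreasing integrand. Here g(eps) = sqrt(R * log(1/eps)) is a hypothetical entropy shape of my choosing, standing in for sqrt(log N(eps)).

```python
import numpy as np

# Each dyadic rectangle has width eps_i - eps_{i+1} = eps_i / 2 and height
# g(eps_i); since g is decreasing in eps, every rectangle sits under the
# curve, so the dyadic sum is at most the full integral over (0, eps0].
R = 4.0
g = lambda eps: np.sqrt(R * np.log(1.0 / eps))   # hypothetical entropy shape

eps0 = 1.0
dyadic_sum = sum((eps0 * 2.0 ** -i / 2.0) * g(eps0 * 2.0 ** -i)
                 for i in range(1, 60))

grid = np.linspace(1e-9, eps0, 1_000_001)        # plain Riemann sum
integral = float(np.sum(g(grid)) * (grid[1] - grid[0]))
assert dyadic_sum <= integral
```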
It's less than the integral from epsilon (i+1) to epsilon i of square root of log N(epsilon, F, L2(Pn)) d epsilon, OK? And with this, we can just sum over all i: the sum of epsilon i over 2 times square root log |Ci| is less than the sum over i from 1 to infinity of these integrals. And you can see that the integrals have matching endpoints — each one ends where the next begins — so they chain together into the integral from 0 to epsilon0 of square root of log N(epsilon, F, L2(Pn)) d epsilon. The upper limit is epsilon0 rather than infinity, but that doesn't matter: you can extend it to infinity, because everything beyond epsilon0 contributes 0. So that's what we have, OK? This inequality is the essential thing — it links the two quantities. Then you just have to work out the constants; I think there's a factor 2 there, which is why you go from 6 to 12. With this, you get that the Rademacher complexity of F is at most the sum of 6 epsilon i over square root n times square root log |Ci|, which is at most 12 times this integral. OK? Any questions? OK, great. And from this figure you can also see that, in some sense, the essence here is how fast the log covering number blows up as epsilon goes to 0. That's what's important: if it blows up very fast, your integral could be infinite, and then you don't have any bound; if it blows up slowly, you get a finite value. [INAUDIBLE] Yeah, so the question is: I chose the levels to shrink by a factor of 2, so it's 2 to the minus j times epsilon0 — what if I change that 2 to 3 or something like that? I never tried that myself, but I think very likely you would just get a similar constant.
Maybe you get better than 12, maybe worse, but this constant is not that important for us. I think it's very unlikely you can gain anything more than a constant. OK, so now let's interpret this theorem a little more, because this form is kind of hard to use directly. What's the intended use? You get some log covering number bound, you do this integral, and you get the Rademacher complexity. But before you compute it, you don't know explicitly how the translation works. Actually, the translation from the covering number to the Rademacher complexity is relatively simple, as I'll show. You don't even really need to redo this integral each time — I never compute it myself anymore, having done it once. So here's how it works. The question is: when is this integral finite? And when it's finite, what's the dependency? And so on. So when is it finite? There are several cases — let's do a case study. Of course, it depends on what the log covering number is. Case (a): the covering number is of the form 1 over epsilon to some power R — so 1 over epsilon is the base and R is the exponent; R is just a placeholder. Then you can do the computation: taking the log of the covering number, the integrand equals 1 over square root n times the square root of R log(1 over epsilon), right?
And you will see that the square root of log(1 over epsilon) integrates to a constant. Oh, by the way, there's a small thing I forgot to mention: I don't want to always integrate from zero to infinity, because sometimes that's annoying. So let's assume F is bounded between minus 1 and 1, so that epsilon0 is at most 1 — or at most a constant — and we only have to integrate between 0 and 1. Beyond that point, the log covering number is zero. So, going back: we integrate square root of R log(1 over epsilon) between 0 and 1, and the log(1 over epsilon) part integrates to strictly a constant. So in big-O notation this is on the order of square root of R over n — the dependency on epsilon is gone, OK? So that's good. Now let's look at another case. Case (a) was one where the dependency on epsilon is very mild, because 1 over epsilon only appears in the base. But sometimes you don't get that. Case (b): if N(epsilon, F, L2(Pn)) is of the form a to the power R over epsilon — so now 1 over epsilon is in the exponent — then 1 over square root n times the square root of the log covering number is 1 over square root n times the square root of (R over epsilon) times log a. And the square root of 1 over epsilon still integrates to a constant on [0, 1] — a universal constant, and we don't care about constants.
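The two benign cases can be checked by brute-force numerical integration. This is my own sketch; the covering-number shapes here are the hypothetical forms from the case study, with arbitrary values of R and a.

```python
import numpy as np

# Entropy integral over (0, 1] for a given log-covering-number function.
# Case (a): log N(eps) = R * log(1/eps)      -> finite integral
# Case (b): log N(eps) = (R / eps) * log(a)  -> finite integral (~ 1/sqrt(eps))
# Case (c): log N(eps) = (R / eps^2) * log(a) would give an integrand ~ 1/eps,
# which diverges at 0 -- that's the tricky case discussed next.
def entropy_integral(logN, lo=1e-8, hi=1.0, k=1_000_001):
    grid = np.linspace(lo, hi, k)
    return float(np.sum(np.sqrt(logN(grid))) * (grid[1] - grid[0]))

R, a = 4.0, 2.0
I_a = entropy_integral(lambda e: R * np.log(1.0 / e))     # case (a)
I_b = entropy_integral(lambda e: (R / e) * np.log(a))     # case (b)
assert np.isfinite(I_a) and np.isfinite(I_b)
```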
So, basically, ignoring log factors, this again equals square root of R over n — still of the same form. And now comes the tricky case, which sits on the boundary between what we can and cannot do. Case (c): the covering number is of the form a to the power R over epsilon squared — an even worse dependency on epsilon, since it's in the exponent and it's 1 over epsilon squared, so it goes to infinity faster as epsilon goes to 0. This case is a little tricky — and it's actually the most common case: I don't really expect you to prove generalization bounds yourself that often, but if you do the work, in many cases this is the kind of covering number you get. It's tricky because if you integrate — take the log of this and then the square root — what you get is square root of R times 1 over epsilon times square root of log a, d epsilon; that is, square root R times square root log a over square root n, times the integral of 1 over epsilon d epsilon. And this is actually infinity. How do you see this? 1 over epsilon integrates to log epsilon, and log epsilon blows up at 0 — the integrand goes to infinity too fast at zero, so the integral is infinite. So this is not good news for us. But actually, this can be fixed, by an improved version of Dudley's theorem. I'm not going to prove the improved version, but in some sense it's almost what you'd expect. The idea is that you don't do the discretization all the way down to 0 — you only go down to a certain level, and below that you pay the worst-case bound. So basically, you only discretize down to a level alpha.
So you bound the Rademacher complexity by 4 alpha plus 12 over square root n times the integral from alpha to infinity of the square root of the log covering number — actually, I'm not sure whether the additive constant is 4 or 2, but let me write 4 anyway, for safety; the constant is not very important. So when you do the integration, you're not integrating from zero to infinity — you integrate from alpha to infinity, and below alpha you just pay the additive alpha term. In some sense, this is an interpolation of the two bounds we had: one bound was the single-scale discretization, where we pay epsilon additively because we use the worst-case bound for the discretization error; the other was the full integral, where we pay nothing in the worst case. This bound says: do the nested, iterative discretization down to alpha, and then pay the small error alpha at the end. Why is this useful? Because it avoids the tricky regime where epsilon is very, very close to zero. So what you can do — and I think you can probably prove this theorem yourself, so I'm not going to show the proof — is take alpha to be something like 1 over poly(n), something super small, so that the 4 alpha term is negligible, and on the right-hand side the integral doesn't blow up. So 4 alpha is negligible, and the question is what the integration looks like. It's something like inverse poly(n), which is negligible, plus square root R times square root log a over square root n, times the integral from alpha to 1 of 1 over epsilon d epsilon. And fortunately, even though this integral goes to infinity as alpha goes to 0, it depends on alpha only very, very weakly. So this is the evaluation-bar notation — log epsilon evaluated from alpha to 1. You know what my notation means, right? OK.
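Plugging the cutoff into the case-(c) covering number gives a closed-form bound that we can evaluate. This is a sketch under the lecture's stated constants (the 4 and 12, which the lecture itself hedges on), with alpha set to 1 over n squared as suggested.

```python
import numpy as np

# Improved Dudley with a cutoff alpha, for log N(eps) = (R / eps^2) * log(a):
# the integrand is sqrt(R * log a) / eps, and its integral from alpha to 1 is
# sqrt(R * log a) * log(1 / alpha). With alpha = 1/poly(n), the whole bound
# is O-tilde(sqrt(R / n)).
def improved_dudley_bound(n, R, a):
    alpha = 1.0 / n**2
    return 4 * alpha + 12 * np.sqrt(R * np.log(a) / n) * np.log(1.0 / alpha)

b1 = improved_dudley_bound(10**4, 5, 2.0)
b2 = improved_dudley_bound(10**6, 5, 2.0)
assert b2 < b1   # the bound shrinks roughly like log(n) / sqrt(n)
```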
Sometimes different calculus books use different notations for this, so I get confused. Anyway, this evaluates to log 1, which is 0, minus log alpha — so you get log(1 over alpha). That is logarithmic in alpha, and alpha is 1 over poly(n), so it's logarithmic in n: this is log n. So eventually this is still O of square root R over square root n if you hide all the logarithmic factors. OK, so in summary: covering numbers of the form (1 over epsilon) to the R, a to the (R over epsilon), and a to the (R over epsilon squared) all lead to a Rademacher complexity bound of this square-root-R-over-n form. And these are pretty much the only cases I know of that lead to this. For example, suppose hypothetically your covering number were something like a to the (R over epsilon cubed). I think this breaks, because with epsilon cubed there, the integrand becomes 1 over epsilon to the 1.5. Let's do a quick heuristic: suppose it is 1 over epsilon to the 1.5, and of course you still integrate from alpha, to try to avoid the blow-up. But it wouldn't be as effective, because the integral of 1 over epsilon to the 1.5 is, up to constants, 1 over square root epsilon instead of log epsilon — evaluated between alpha and 1, you get something like 1 over square root alpha minus 1, maybe with a factor of 2 in front; I don't remember the exact constant, but the problem is that this is not log(1 over alpha), it's 1 over square root alpha. And now you cannot take alpha to be inverse poly(n), because then you'd pay too much here.
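The heuristic that the epsilon-to-the-1.5 integrand breaks the trick can be made quantitative: optimizing the cutoff numerically shows the best achievable rate degrades to roughly n to the minus one-third. This is my own numerical aside (with an arbitrary constant C), not a claim from the lecture.

```python
import numpy as np

# With an integrand ~ 1/eps^{1.5}, the cutoff bound becomes
#   f(alpha) = 4*alpha + (C / sqrt(n)) * 2 * (1/sqrt(alpha) - 1),
# and no choice of alpha recovers the ~ 1/sqrt(n) rate: the optimum
# scales like n^{-1/3} instead.
def best_cutoff_bound(n, C=1.0):
    alphas = np.logspace(-12, 0, 20000)
    return float(np.min(4 * alphas
                        + (C / np.sqrt(n)) * 2 * (1 / np.sqrt(alphas) - 1)))

rates = [best_cutoff_bound(n) * n ** (1.0 / 3.0) for n in (10**4, 10**6, 10**8)]
# The rescaled values stay within a constant of each other, consistent
# with an n^{-1/3} rate.
assert max(rates) / min(rates) < 2.0
```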
So then it's going to be a very tricky balance between this 4alpha term and this term, right? And at least I'm not aware of any cases where you can balance them in a nice way so that you still get a good bound. I think it's probably not even possible. But on the other hand, for the case when you have this log 1 over alpha here, the balance is not tricky at all. It's kind of like you have a free lunch if you pay a log factor. So it's almost always possible, right? So that's the difference. And actually, most typically, you're going to get a covering number of this form. That's the most typical case. OK, any questions? OK, so now-- I have 15 minutes today. So the rough plan for the rest of the 15 minutes and the next lecture is that we are going to talk about covering number upper bounds for linear models and deep nets, and those will imply Rademacher complexity bounds. Today, I'm going to talk about linear models. But for linear models, I'm not going to give you the proof, because I think the proof is a little bit too technical, and in most cases you wouldn't need to prove it yourself-- you just have to invoke it. So basically, I'm just going to state some theorems and tell you that, for linear models, this is almost all done. You know everything about it, and there are pretty much matching upper and lower bounds. This is actually from a paper by Tong Zhang in 2002. So this is for linear models. So suppose x1 up to xn in Rd are n data points, and p and q are so-called conjugate pairs-- I hope you have probably seen these kinds of things. So Hölder's inequality holds if 1 over p plus 1 over q is equal to 1. And we also assume that p is at least 2 and less than infinity. But in most cases, you can just think of p and q as both being 2.
That's the most important thing. And assume that the p-norm of xi is less than C for every i. And then let's consider this hypothesis class Fq, indexed by q. So this is the family of linear models where the q-norm of the linear model is bounded by B, right? Recall that we have actually talked about these kinds of models, where p is 2 and q is 2, or maybe p is 1 and q is infinity, these kinds of things. And before, we proved the Rademacher complexity bound. Now we prove the covering number bound, which will also give a Rademacher complexity bound. And this rho is equal to L2(Pn), the same metric as we have defined before. And then the log covering number, log N of epsilon, Fq, rho, is less than the ceiling of B squared C squared over epsilon squared-- the ceiling doesn't matter; it's just there to deal with the corner cases where this is 0 or something like that-- times log of 2d plus 1. And when p is 2 and q is 2, you can strengthen this slightly to: log N of epsilon, F2, rho is less than B squared C squared over epsilon squared times log 2. The base of the log is also not important, because it only changes the constant; I'm being precise here just for the sake of preciseness. So you can improve the d dependence into something that depends on n instead, which doesn't matter that much, at least for our purposes. For other cases, if you care about a bound that absolutely doesn't depend on d, then this matters; otherwise, it doesn't matter that much, OK? And the way to remember this is just that it gives the same Rademacher complexity, right? So basically, if you use the discussion above-- the conversion above, like we have done-- this is of the form a to the R over epsilon squared, right? Because here you have a log, and after taking the log, it's R over epsilon squared, where the R is B squared C squared.
So using this conversion, you get that the Rademacher complexity is less than square root R over n, where R is B squared C squared. This is BC over square root n, up to logarithmic factors, and this is the same thing as we have done before, right? B was the norm of the classifier and C was the norm of the data, so you get their product over square root n. There are some small differences in terms of the logarithmic factors, which let's ignore just for simplicity. OK, and you can also show this for multivariate linear functions. I'm showing this just because it will be useful as a building block later: when you have networks, a multivariate linear model is the building block for a layer of the network. In some sense, there's nothing really deep here, but I just have to state it so that I can use it later. OK, first, let's have a small definition. So suppose M is an m-by-n matrix. Let's define the (2,1) norm. This is not the operator norm; it's just some other norm. So this is the (2,1) norm, which is the sum of the 2-norms of the columns-- each column Mi is of dimension m. So basically, you first take the 2-norm of each column and then you take the 1-norm to group them. Right. And then, by this definition, the (2,1) norm of M transpose is basically the sum of the 2-norms of the rows of M. That's the definition, and we're going to use it in the statement. So here is the theorem. Here, I'm not going to do general p and q, just for simplicity-- you just do the two-norm version, so p and q are both 2. So you consider the multivariate function, which outputs multiple outputs. And this W is, let's say, of dimension m by d, and let's constrain the (2,1) norm of W to be less than B.
And, again, let C be the average of the norms of the data. Then you get log N of epsilon, F, L2(Pn) less than C squared B squared over epsilon squared, times ln 2d times m. So it's kind of the same thing: the norm of the parameter times the norm of the data over epsilon squared. But the norm of the parameter is measured by this (2,1) norm-- oh, sorry, the (2,1) norm of W transpose; I think I had a typo before. So what's the (2,1) norm of W transpose? As I said, it's the sum of the 2-norms of the rows of W. So in some sense, there is nothing surprising here: you just treat all the output dimensions independently. For example-- let's use a different color-- suppose you write W as W1 transpose up to Wm transpose, where you have m row vectors. Then Wx is really just W1 transpose times x up to Wm transpose times x. So you can view this linear layer as m different one-dimensional linear functions, and the (2,1) norm of W transpose is just the sum of the 2-norms of the Wi. So you take the sum of the complexity measures of each of the models Wi transpose x-- the 2-norm of Wi is the complexity measure of the linear function, and you take the sum. As for the proof-- yeah, there's nothing more there. I think I have five minutes. Let me also mention another thing, which is useful preparation for the deep nets. This is also related to how we deal with bounding the log covering number. You can also use Lipschitz composition; this is a useful tool for dealing with covering numbers. Recall that we had the Talagrand lemma, right? We had the Talagrand lemma, which was a statement about Rademacher complexity, right?
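A short numeric sketch of both claims in this stretch of the lecture (entirely my own illustration; the matrix sizes, B, C, and the Monte Carlo setup are arbitrary choices): the (2,1) norm of W transpose equals the sum of the row norms of W, and the scalar linear class with norm of w at most B, on data of norm C, has empirical Rademacher complexity at most BC over square root n:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- (2,1) norm: 2-norm of each column, then sum them up (the 1-norm) ---
def norm_2_1(M):
    return np.linalg.norm(M, axis=0).sum()

m, d = 3, 5
W = rng.standard_normal((m, d))          # rows W_1,...,W_m = m scalar linear models
row_norm_sum = sum(np.linalg.norm(W[i]) for i in range(m))
assert np.isclose(norm_2_1(W.T), row_norm_sum)   # ||W^T||_{2,1} = sum of row norms

# --- Monte Carlo check of Rad_n(F) <= B*C/sqrt(n), F = {x -> <w,x> : ||w||_2 <= B} ---
n, B, C = 200, 2.0, 3.0
X = rng.standard_normal((n, d))
X *= C / np.linalg.norm(X, axis=1, keepdims=True)    # put every x_i at radius C

# sup over ||w|| <= B of (1/n) sum_i sigma_i <w, x_i> = (B/n) * ||sum_i sigma_i x_i||_2
sigmas = rng.choice([-1.0, 1.0], size=(5000, n))     # 5000 draws of the sign vector
rad_hat = (B / n) * np.linalg.norm(sigmas @ X, axis=1).mean()
print(rad_hat, B * C / np.sqrt(n))                   # estimate vs. the BC/sqrt(n) bound
```

The supremum has the closed form (B/n) times the norm of the signed sum by Cauchy-Schwarz, and Jensen's inequality bounds its expectation by C square root n, matching the BC over square root n the lecture derives from the covering bound.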
So you say something like: the Rademacher complexity of phi composed with H is less than the Lipschitz constant of phi times the Rademacher complexity of H. Something like this. So this was the Lipschitz composition for Rademacher complexity. And it turns out that for the log covering number, the Lipschitz composition is even trivial. The Talagrand lemma I didn't prove for you. I just said it's a fact, a theorem. And actually, proving it is not easy, as I mentioned; it's sometimes pretty complicated-- I think it's a challenging theorem to prove. But here, the Lipschitz composition becomes trivial for covering numbers. I think the fundamental intuition, the spirit, is the same. It's just that, for covering numbers, somehow this becomes super intuitive and explicit. So let me state the lemma-- this is almost a trivial thing. Suppose phi is kappa-Lipschitz, and let's say rho is this L2 norm thing. Then the log covering number of epsilon, phi composed with F-- I messed up the order of the arguments in my notes for every occurrence after a certain point, so I have to fix it later. So if you look at the log covering number of the composed function class phi composed with F, this is less than the log covering number of the original one, but with a different radius, or different granularity. You basically have to cover the original one with epsilon-over-kappa granularity, so that you can turn that into an epsilon cover of the new composed class. And this is pretty much trivial: you just take an epsilon-over-kappa cover of F-- let's call it C. Then phi composed with C is an epsilon cover of phi composed with F, because for every phi composed with f in this class, you can first find f prime in C such that rho of f and f prime is less than epsilon over kappa. So you first find this cover element f prime in C, and then you just compose it.
So the claim is that phi composed with f prime is actually a neighbor of phi composed with f. This is because, if you look at the distance between these two things, it is the square root of 1 over n times the sum of phi of f prime of zi minus phi of f of zi, squared. And you use the Lipschitzness: this is less than the square root of 1 over n times kappa squared times the sum of f prime of zi minus f of zi, squared, which is kappa times rho of f and f prime. And because f and f prime are epsilon-over-kappa close, this is at most kappa times epsilon over kappa, which is epsilon. So we are done. OK, I guess that's a good stopping point, and we'll continue next lecture with deep nets. Cool. Any questions? So the Lipschitzness being [INAUDIBLE]? Yeah, yeah. Yes-- yes, that's right. The Lipschitzness. [INAUDIBLE] Yes, so far phi is a one-dimensional function. The output is one-dimensional, and then you have an R-to-R function phi, so there's no choice of metric. But if the outputs are vectors and your phi is a vector-to-vector function, then you have to pick a norm and make everything compatible. The Lipschitzness just has to be compatible with the same norm. [INAUDIBLE] Yes, yes, we're going to use just L2. OK, OK. Sounds good, OK. I guess see you on Wednesday.
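The inequality at the heart of that last proof-- composing with a kappa-Lipschitz phi shrinks the L2(Pn) distance by at most a factor kappa, so an epsilon-over-kappa cover pushes forward to an epsilon cover-- can be checked in a few lines (a sketch of my own; tanh as a 1-Lipschitz phi and the particular f and f prime are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
z = rng.standard_normal(n)         # a sample z_1,...,z_n

kappa = 1.0
phi = np.tanh                      # tanh is 1-Lipschitz (its derivative is at most 1)

def rho(u, v):
    # L2(P_n) distance between two functions, given their values on the sample
    return np.sqrt(np.mean((u - v) ** 2))

f  = 0.7 * z + 0.1                 # values of some f in F on the sample
fp = 0.6 * z                       # values of a nearby cover element f prime

d_before = rho(f, fp)              # rho(f, f')
d_after  = rho(phi(f), phi(fp))    # rho(phi(f), phi(f'))
print(d_after, kappa * d_before)
assert d_after <= kappa * d_before # the contraction step in the proof
```

So if rho(f, f') is at most epsilon over kappa, then rho(phi(f), phi(f')) is at most epsilon, which is exactly why composing phi with each element of the epsilon-over-kappa cover gives an epsilon cover of the composed class.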
YaleCourses_Philosophy_of_Death | 10_Personal_identity_Part_I_Identity_across_space_and_time_and_the_soul_theory.txt | Professor Shelly Kagan: At the end of last class, I suggested that from here on out I'm going to be assuming that there is no soul. I'm going to be discussing the issues that we turn to hereafter from the perspective of the physicalist, the person who says that a person is basically just a fancy body--a body that can do certain special tricks, a body that can function in certain ways that we associate with being a person, a body that can P-function, as we put it. Now, I've given you my reasons for believing there are no souls. Basically, that the various arguments that might be offered for believing in souls don't seem very compelling upon examination, so there's no good reason to posit this extra entity. For the most part, then, I'm going to be putting aside soul talk. Periodically, I'll come back and talk about how some issue that we are considering might look from the perspective of somebody who does believe in souls. But, as I say, for the most part, I'm going to be assuming there are no souls. For those of you who still do believe in the existence of souls, I suppose you could take a great deal of the discussion that follows as some form of large conditional or subjunctive. If there were no souls, then here's what we'd have to say. So although I'll be largely talking from the perspective of the physicalist, if you haven't become convinced of the truth of physicalism, so be it. We'll at least explore what will we say about death if we've decided that people are basically just bodies? Now, you'll recall that at the start of the semester I said, in thinking about the question, could I survive my death? there were two basic things we had to get clear on. First, we had to get clear on, what am I? What are my parts? That's why we spent the last several weeks worrying about the question, am I just a body? Am I a body and an immaterial soul as well? 
Or perhaps, strictly speaking, just the soul? Having looked at that question, we're now going to turn to the second basic question, what would it be to survive? What would it be for a thing like that to continue to exist? Now, of course, we're going to ask most particularly, what would it be for a thing like that to survive the death of the body? Could it even make sense for a person to survive the death of his body? You might think the answer to that is no, if we are physicalists, but in fact, it's not so clear the answer to that is no. But in order to address that particular question--What is for me to survive the death of my body? Is that even a possibility or not?--we first have to get clear about the more general question, what is it for me to survive, period? Take the more familiar hum-drum case. Here I am lecturing to you today, Thursday. Somebody's going to be here, no doubt, lecturing to you next week, next Tuesday. The question of survival can be asked about that very simple case. Is the person who's going to be lecturing to you on Tuesday the very same person as the person who is standing in front of you lecturing to you now? Will that person survive the weekend? I certainly expect to survive the weekend. But what is it to survive the weekend? What is it? We might say, look, we've already got the beginnings of an answer. For me to survive until Tuesday, presumably is for there to be somebody, some person alive lecturing to you on Tuesday, and--here's the crucial point--for that person lecturing to you on Tuesday to be the very same person as the person lecturing to you today, on Thursday. If I were to be killed in a plane accident this weekend and there was a guest lecturer for you on Tuesday, there'd be somebody alive lecturing to you. But, of course, that wouldn't be me. So the question we want to get clear on is, what is it for somebody on Tuesday to be the same person as the person here talking to you on Thursday? 
We can ask the question more grandly, about larger expanses of time. Suppose there's somebody alive 40-odd years from now, in the year 2050. Could that be me? To ask, have I survived until 2050? is to ask, is that person who's alive in 2050 the very same person as the person who's standing here now lecturing to you? What is it for somebody in the future to be the very same person as this person who's here now today? Now, in thinking about this question, it's important not to misunderstand what we're asking. Some of you may misunderstand what I'm asking. Some of you may want to say, "Look, the person lecturing to you now has at least a fair bit of his hair. He's got a beard. Let's suppose that the person alive in 2050 is bald and bent over, has no beard. How could they be the same person? One's got hair, one doesn't. One's got a beard, one doesn't. One stands straight, one's crooked. It can't be the same person." That's the mistake that it's important for us to get clear about. So I'm going to spend some time talking about examples that I think we would not find puzzling, and work our way back up to the case of personal identity. So first I'm going to say some things about identity across time--or indeed initially, identity across space--with some familiar, hum-drum, material objects. So, let's start. Suppose you and I are walking along and we see a train. So let me draw the train first. I'm not a very good artist, but all right. There's our train. We start walking. I point to the caboose. Let's make this look more like a caboose, slightly more like a caboose. Just so it doesn't look too much like the locomotive. I point to the caboose and I say, "Look at that train." And we're walking along, we're walking along, we're walking along. We come to the end of the train and I point to the locomotive and I say, "Wow! Look how long that train is! That's the very same train I pointed to five minutes ago. We've been walking along it all this time." 
Now, imagine that you say--you wouldn't say anything as stupid as this, but imagine that you said this--you say, "This isn't the same train as the train we pointed to five minutes ago. After all, right now what you're pointing to is a locomotive, whereas five minutes ago what you pointed to was a caboose. A caboose isn't the same thing as a locomotive. How could you possibly say it's the same thing? Who could possibly make a mistake like that? The locomotive's got smoke coming out of it. The caboose doesn't. And so forth and so on. There's a lot of differences between the two. How could you make such a silly mistake?" Well, of course, what I would then want to say to you is, no, actually, you're the one who's making the mistake. I agree, of course, that a locomotive is not the same thing as a caboose. But I wasn't claiming that it was. Rather, initially when we started our walk, I pointed to a caboose, but by pointing to the caboose, I picked out a train. I said, "Look at that train." And what I was referring to wasn't just the caboose, but the whole, long, extended-through-space object, the train, of which the caboose was just a part. And when--At the end of our walk when I pointed to a locomotive and said, "Look at that train," by pointing to the locomotive, I was picking out a train, an entire train. This long, extended-through-space object, the train. And when I said, "This train that I'm pointing to now is the very same train as the train I pointed to five minutes ago," I'm not saying what is certainly false. I'm not saying the locomotive is the same thing as the caboose. Rather, what I'm saying is, the entire extended-through-space train that I'm pointing out now is the same train as the entire extended-through-space train that I picked out five minutes ago. And that claim, far from being false, is true. Now, as I say, none of us would make that mistake. But it's a tempting mistake if you're not being careful. 
And that mistake might mislead us if we start thinking about the personal identity case. But let's continue with the train for a bit. Suppose, as we're taking our walk, part of the train isn't visible. There's a large warehouse that's blocking the view. We're walking along the way. We see a caboose. I say, "Ha! There's a train." Then for a while we're walking, we don't see anything because all you can see is the warehouse. And then after we get past the warehouse, a very long, block-long warehouse, I see a locomotive and I say, "Hey look. There's a train." And then I ask you, "Do you think this is the same train as the train we pointed to before?" Now again, it's important not to misunderstand that question. That question is not asking, is the locomotive that we're pointing to now the same as the caboose that we pointed to earlier? No, of course not. The locomotive's not the same as the caboose. But that's not what I'm asking. What I'm asking rather is, remember earlier when I pointed to the caboose? In doing so, when I started talking about a train, I was picking out some entire extended-through-space train. Right now, in pointing to a locomotive, I'm picking out not just the locomotive. I mean to be talking about an entire train. Some entire extended-through-space train. And I'm asking not about the locomotive and the caboose, but rather I'm asking about the trains that I pick out by means of the locomotive and caboose. Are they the same train? And the answer is, "Don't know; can't tell. The building's blocking the view." Suppose we had x-ray vision and could see through the building. Then the answer would be, "Well look, if what we've got is something like this, then of course, we do have one single train." The extended-through-space train I picked out at the end of our walk is the same as the extended-through-space train that I picked out at the beginning of our walk. But it might not turn out that way.
It might turn out if I had x-ray vision, that what I'd see is this . Then the answer would be, "Ah, there's not one train here, but two trains." The extended-through-space train that I'm picking out when I point to the locomotive turns out to be a different train from the extended-through-space train I picked out when I pointed to the caboose. I don't have x-ray vision. I don't know which of these metaphysical hypotheses is the correct one. All right, easy enough with trains. We know how it works with trains. Now let's talk about something not a whole lot more complicated--cars. I used to have a car I bought in 1990. My ability to draw cars is even worse than my ability to draw trains. There's my car in 1990. It was new. It was sparkly. Then I drove it for some years and I got some dents and so forth and so on. Here's a smile. By 1996 or 2000, it wasn't looking so good. The sparkle had gone. It had a couple of dents. That was the car in 2000. By 2006, it had a lot of dents, 2006, when it finally died. All right, now we all understand the claim that the car I had in 2006 was the very same car as the car I had in 1990. Of course, again, you've got to be careful not to misunderstand what's being said. We all know that in 2006 the car had a lot of scratches and had gotten banged in on one side and pretty sorry looking in terms of the scrapes and the paint job and the rust. Whereas, the car in 1990, new and shiny and smooth. You might say the 2006 car stage is obviously not the same thing as the 1990 car stage. That's like thinking that the locomotive's the same thing as the caboose. But when I say it's the same car, I don't mean to be talking about car stages. I mean to be talking about a single thing that was extended through time. There I am, proud owner of my new car in 1990 and I say, "This is a car. It's a car that will exist for more than a few minutes. 
It's a car that will exist for years and years and years," though at the time I didn't realize it was going to last 16 years or longer. When I refer to my car--as opposed to what we could dub the car stage or the car slice--when I refer to the car in 1990, I mean to be talking about the entire extended-through-time object. In 2006, when I point to that sad heap and talk about, "I've had that car for 16 years." Well, I haven't had that car stage for 16 years. That car stage or that car slice, if we wanted to talk about it that way, has only been around for however long, months, years, a year. It hasn't been around for 16 years. But when I talk about that car, I'm picking not just the current slice or the current stage of the car, but the entire extended-through-time object. When I say, "That's the very same car I've had for 16 years," I mean, "Think of the object extended through time that I'm picking out by pointing to the current slice. That's the very same extended-through-time object that I picked out 16 years ago by pointing to what was then the current slice. The slices aren't the same; the car is the same. It's the very same car." Well, now let's imagine a somewhat more difficult case. At the end of 2006, my engine failed. I sold the car to a dealer, junk dealer. Suppose that in 2010 I see a car in the junk lot and it looks familiar to me. I say, "Whoa! That's my car." Is it or isn't it my car? This is sort of like the case with the factory blocking the view. 1990 to 2006, very easy. Saw the car every day in my garage. But here is a four-year--Instead of a factory blocking my view, it's the mists of time blocking my view. And I ask, "Same car or not?" Again, by this time, I imagine you don't need to be warned, but let me just warn you a couple more times. I'm not asking, "Is the car stage, the 2010 car stage, the same car stage as the 2006 car stage? Maybe not. Maybe obviously not. 
I'm asking rather, in pointing to the 2010 car stage, I mean to be picking out an entire extended-through-time entity, the car. And I'm asking, "Is that the very same extended-through-time entity as the extended-through-time entity that I used to own?" I wonder. And the answer is, "Don't know." The mists of time are blocking my view. I don't know the answer. But we know what the possibilities are. One possibility is that indeed it's the very same--I won't draw it all, 2008 and so forth. It could be the very same car. If we knew what it took to have the various stages of a car add up to the very same car, then that would be one possibility. But there might be a different possibility. It could have been that after I sold it to the junk dealer, he crushed it, turned it into a heap of metal and that was the end of my car. And the car I'm seeing on the dealer's lot in 2010 might be some other car with its own history. What we're wondering about is, is there a single--well, here's a piece of jargon--is there a single "space-time worm" here or are there two? When I look at the car in 2010 and say, "There's a car. I wonder if it's the same car," I'm asking about this thing that's extended--well, obviously through space, since cars take up some space--and through time. Looks a bit like a worm. So philosophers call them space-time worms. Is the space-time worm that makes up this car the same space-time worm as the one that made up my car? One worm there or two? And the answer might be, "Don't know, need to have more facts." But at least that's what the question is. Now metaphysically, there's different ways of trying to pose the set of issues that I've begun to talk about. Should we say, as we might say with the train, the train is made up out of the various cars, the locomotive, the caboose, and the intervening cars? So the train--that's the way we normally think about trains, at least the way I normally think about trains--the train's a bit like a sandwich, right? 
The metaphysically fundamental things are the caboose, the locomotive, the intervening cars. If they're glued together in the right way, they make up a train. What's the right kind of metaphysical glue for trains? Well, it's being connected with those little locks. That may or may not be the right way to think about what I've been calling car stages or car slices. On some metaphysical views, you might say, just exactly like with the train, the car stages are the metaphysically fundamental things and a car, something extended through time, is glued together like a sandwich from the car stages. And then, we might worry about what's the relevant metaphysical glue for cars. On other metaphysical views, no, what's really prior is the car itself, and talking about car stages is a certain convenience, a kind of way of chopping up the fundamental thing, the car. So, to use an analogy that I think David Kaplan, a philosopher at UCLA, offers, it's as though you have to think of it more like a bologna or salami that you can slice. For certain purposes you can talk about slices, but the fundamental thing's the salami. All right. In thinking about cars, should we say that the fundamental thing is the car stages and they get put together like a sandwich to make cars? Or should we think that the fundamental thing is the car extended through time and it can be sliced up to make car stages? For our purposes, I think we won't have to go there. It doesn't really matter. As long as we're comfortable talking about entire space-time worms, the cars, and the slices or the stages, we don't have to ask which is metaphysically prior. I should also mention that there are other metaphysical views about what goes on when an object exists over time. I've been here helping myself to the suggestion that we should think about extension over time analogously to the way we think about extension over space.
That's why I started with a spatial example, the train, and moved to the temporal example, the car. And there are those philosophers who think that's exactly the right way to think about it and those philosophers that think no, no, that's misleading. When an object is extended over time, really the entire object's right there at every single moment. These are interesting and difficult questions. But again, I think for our purposes, we don't have to go there. So I will help myself to this language of space-time worms, objects that extend not only over space but also over time. And distinguish the entire worm from the various slices or stages that either make up the worm or that we could slice the worm into. The point that I've been emphasizing is, well first point, of course, has been, "Don't confuse the stages with the entire space-time worm." The stages can differ without the entire space-time worm being a different worm. Second question I've hinted at that we're about to turn to, not literally turn to at the moment, but shortly we'll turn to is, "What's the relevant glue?" What makes two stages, stages of the very same thing? In the case of trains, as I say, it's fairly obvious. What is it in the case of cars? What makes the 1990 car stage a stage in the very same car, the extended through space and time worm car, as the 2006 stage? What's the metaphysical glue that glues these stages together? And the answer, not that there aren't puzzles about it, but the answer is roughly, "It's the very same car if it's the very same hunk of metal and plastic and wires." There was the car. A car is just some metal and plastic, rubber. And that very same hunk continued into 2000 and it continued into 2006. The glue, the key to identity across time for cars, is being the same hunk of stuff. Now, that doesn't mean it's got to be the same atom for atom. We know that's not true. Look, think about my steering wheel. 
Every time I grabbed the steering wheel to drive, I wore away thousands of atoms. You can lose some atoms and still be the very same steering wheel. Every now and then, I'd replace the tires on my car. But for all that, it was the same hunk of stuff. Now this raises an interesting issue. How many changes of the constituent parts can you have and still be the same hunk of stuff? If this was a class in which we were going to worry about the general problem of identity across time, this would be a problem we'd have to directly face. But since we are only looking at enough of the problem of identity to get to the question that we really want to think about, the nature of personal identity across time, I'm not going to pursue that. I just want to flag the thought that you can be the very same hunk of stuff, even if some of the constituent atoms have changed along the way. And even bigger parts. You can replace the headlights and still be the same hunk of stuff. At any rate, that's what's gone on in the car case, same hunk of stuff 1990-2006. And when I see the car on the junk dealer's yard in 2010 and ask, "Is that my car or not?" the answer lies in--if only we could know--is that the same hunk of stuff or not? That's what the key, the metaphysical glue is, being the same hunk of stuff. All right, let's turn now to the case we really wonder about, personal identity. Here's somebody lecturing to you in 2007, Shelly Kagan. We imagine there's somebody in 2050 and we ask, "Is that Shelly Kagan?" We'll call him "Mr. X." We ask, "Is that the same person or not?" Now again, at this point you're not going to be tempted by the mistake. I'm not asking, "Is this person stage Mr. X the same person stage as SK 2007?" Obviously not. SK 2007 has still got his hair, has the beard, stands up more or less straight. Mr. X is bald, doesn't have a beard. I suppose I should have drawn him bent. Can I do that? A little cane. I'm not asking, "Is the person stage Mr.
X the same as the person stage SK 2007?" Sounds like a computer or something. Get the SK 2007! I'm not asking that. I'm asking, I'm saying, "Look, when you look at the current stage, the current person slice, and think about the entire extended-through-time entity, the person that makes up Shelly Kagan, or that is Shelly Kagan, is that the very same person as the extended-through-time person that you've got in mind when you point to Mr. X in 2050?" The stages are obviously different. But by looking at the stages, we pick out a space-time worm that makes up a person. And we're asking, "Is that the very same space-time worm as the one we picked out previously or a different space-time worm than the one we picked out previously?" And the answer, presumably, is going to be, "Well it depends on getting clear on whether the stages are glued together in the right metaphysical way." And so, what we'd like to know is, well, what does it take for two person stages to make up or be part of the very same extended-through-time person? What's the metaphysical glue that underlies being a single extended-through-time person? What's the key to personal identity? If we could get clear about the answer to that metaphysical question, the key to personal identity, we'd at least know what we needed to find out to answer the question, "Is this one person or two?" Are the pieces glued together in the right way? Now for a different question, the question that we're ultimately hoping to get an answer to: could I survive my death? Well look, think again about the question we started with. Could I survive the weekend? To survive the weekend, there's got to be somebody who's alive, some person on Tuesday, and that person's got to be the very same person as the person you're looking at now, you're thinking about now. Or to put it in terms of stages, that person's got to be--that stage, that slice has to be part of the very same extended-through-time space-time worm as this stage is.
They've got to be glued together in the right way. We can't tell whether that's true until we know what the glue is. But at least we anticipate that, well, there will be somebody here on Tuesday who is glued together in that way, the right way, whatever that turns out to be. The stages will be glued together in the right way. Suppose I asked then, "Will I survive my death?" All right, so I'm going to be optimistic. I'm going to assume that I make it to 2040. 2040… I won't even be 90 yet. That's not too wildly optimistic. It's optimistic, but not wildly optimistic. So here's the SK 2040. We know that there's an entity extended through space and time, a space-time worm, a person. Then let's suppose, sadly, in 2041 my body dies. And I ask, "Could I survive my death, that is to say, the death of my body?" Well, we want to know, after 2041, let's say 2045, is there somebody who's a person, call him Mr. X. Could it be the case that there'd be a person in 2045, after the death of my body in 2041, could it be the case that there's a person who is part of the very same space-time worm that you're thinking about right now? Could that be or not? We can't answer that question until we are clearer about what it takes to have identity across time. What's the key to personal identity? What's the metaphysical glue? Once we get clear about what the relevant metaphysical glue is, we'll be in a position to start asking, "Could this happen or not?" All right, that's the question I want to turn to, then. What are the possible positions on this question? What's the key to personal identity? What's it to be the very same person? As we might put it somewhat misleadingly, what is it for "two" people to really be the same single extended-through-time person? Suppose we believed in souls. Then here would be a natural proposal. The metaphysical key to personal identity is having the very same soul. So suppose I was a dualist.
I'd say, "Look, you're looking at a body, but connected in this intimate way with this body is a particular soul, the soul of Shelly Kagan. What makes it true that the person lecturing to you next Tuesday is Shelly Kagan, the very same person, what makes that true is that it's the very same soul. As long as this soul is here again on Tuesday, it'll be Shelly Kagan. If it's a different soul, it's not Shelly Kagan." That's the natural thing to suggest if we believe in the soul view. The key to personal identity--not the only thing a soul theorist can say, but the natural thing for a soul theorist to say--the key to personal identity is having the very same soul. Same soul, same person. Different soul, different person. Imagine that God or a demon or what have you, for whatever perverse reason, severs the ordinary connection between my body and my soul and then reconnects the wires, as it were, so that there's a different soul animating and controlling this body on Tuesday. For whatever perverse reasons, maybe to make some sort of philosophical point, that person decides to come in anyway on Tuesday and lecture to you about philosophy. According to the view that we're taking, which we'll now call the soul view, according to the soul view, it won't be me lecturing to you on Tuesday. Why not? Because we've just stipulated it's not the same soul. It's a different soul. The key to personal identity, according to the soul theory of personal identity, the key to personal identity is having the same soul. When I ask myself, "Will I survive the weekend?" what I'm asking is, "Will my soul still be around come Tuesday?" As long as my soul still exists and is functioning, it's still me. I'm still around. In fact--peeking ahead of course, and this is why we are often drawn to soul views--even if my body dies, as long as my soul continues to exist, I continue to exist. The key to personal identity, according to the soul view, is having the same soul.
As long as my soul continues to exist, it's still me, whether or not my body's still alive. And it's precisely for this reason that at least the soul, belief in the soul, combined with the soul theory of personal identity, holds out the possibility of surviving my death. We may not know that the soul will continue to exist after the destruction of the body, but at least it seems like a possibility. Plato of course, as we know, tried to argue that we could know, that there was--there were good grounds for believing the soul would continue to exist. I've said I don't find those grounds so convincing. But even if we didn't think we could show that the soul would continue to exist, at least it could, it would make perfect sense to think about it continuing to exist. And so I could survive the death of my body. In contrast, it looks--prospects don't look so promising for surviving the death of my body if we don't believe in dualism, if we're physicalists. If a person's just a P-functioning body, how could it be that after the death of his body he's still around? Well, we'll say more about that a little bit later. Come back to the soul view. It's me as long as it's the same soul. It's not me if it's a different soul. Now consider the following possibility. Suppose that over the weekend, at 3:00 a.m., Saturday night, Sunday morning, while I'm asleep, God replaces my soul with a different soul, hooks it up to the body, gives that soul, that replacement soul, all of my memories, all of my beliefs, all of my desires, all of my intentions. Somebody wakes up Sunday morning and says, "Hey, it's a great day. Wonderful to be alive. I'm Shelly Kagan. Got to get to work." Whatever it is. Says "I'm Shelly Kagan"; but he's not. According to the soul view, he's not. Because according to the soul theory of personal identity, to be me that person's got to have my soul. And in this story, he doesn't have my soul. My soul got destroyed, let's suppose, 3:00 a.m. Sunday morning.
A new soul got created. It's not me. There's a person there, all right. It's a person that doesn't have a very long history. Maybe he'll go on to have a long history. But it's a different extended through space and time person than the one you're thinking about right now. Because, according to the soul view, to be me it's got to have the same soul and we just stipulated, not the same soul. Think about what that means. If God were to replace my soul Saturday night, I die. And the thing that wakes up Sunday isn't me. Of course, he'd think he was me. He'd think to himself, "I'm the very same person who was lecturing about philosophy last week." But he'd be wrong. It isn't the same person, because it's not the same soul. He'd be wrong and--notice this--there'd be no way at all he could tell. He could check his beliefs. He can check his desires. He can check his memories. But that's not the key to personal identity, according to the soul view. The key to personal identity, according to the soul view, is having the very same soul. You can't check that. You can't see the soul to see if it's the same one. So if this were to happen to him, he wouldn't be Shelly Kagan, the person who'd been lecturing last week. But there'd be no way at all he could know that. And now the question you would need to ask yourself is, how do you know this didn't happen to you last night? You woke up this morning thinking, I'm the very same person--Joe, Linda, Sally, whatever it is--the very same person who was in class yesterday. How do you know? How could you possibly know? If God replaced your soul with a new one, destroyed the old one, gave the new one all the old memories, beliefs, desires, goals, and so forth, that person who was in class last week, yesterday, died. The person who's here now hasn't been around 10 years, 20 years, what have you. You were born a few hours ago. And there'd be no way at all that you could possibly tell. 
How do you know, not only that it didn't happen to you last night, how do you know something like this doesn't happen every single night, every hour on the hour, every minute, every second? God whips out the old soul, destroys it, puts in a new one with--Maybe souls only last for a minute and a half. If that was happening, then people don't last very long. Bodies may last 20 years, 50 years, 80 years, 100 years, but people would only last an hour or, if it's every minute substitution, a minute. And you'd never possibly be able to tell. Now these worries were raised by John Locke, the great British philosopher, and he thought, this is too big a pill to swallow. This is too big a bullet to bite. We can't take seriously the suggestion that there's no way at all to tell whether it was still me from the one day to the next, from one hour to the next, from one minute to the next, just not plausible. It's not that there's anything incoherent about this view. It doesn't say anything logically contradictory about this view. You just have to ask yourself, "Could this really be what personal identity is all about? That there'd be no way at all to tell whether I've survived from one minute to the next, from one hour to the next?" Locke thought no, you couldn't possibly take this view seriously if you thought about what it meant. Notice, this is not an argument that souls don't exist. If you find this argument convincing, what it's an argument for is the claim that even if souls do exist, they may not be the key to personal identity. And so what we have to ask ourselves is, what's the alternative? What better suggestion is there for what we could point to as the metaphysical glue, the key to personal identity? And that's the question that we'll take up next time.
YaleCourses, Philosophy of Death. Lecture 5: Arguments for the Existence of the Soul, Part III: Free Will and Near-Death Experiences.

Professor Shelly Kagan: All right. We've been talking about arguments that might give us reason to believe in the existence of an immaterial soul. The kinds of arguments we've been considering so far all fall under the general rubric of "inference to the best explanation." We posit--or the fans of souls posit--the existence of souls so as to explain something that needs explaining about us. I've gone through a series of such arguments, and the one that we ended with last time was the suggestion that we need to believe in the existence of a soul in order to explain the fact that we've got free will. The fact that we've got free will is something that most of us take for granted about ourselves. But the complaint then, or the objection to the physicalist, takes the form that we couldn't be a merely physical entity because no merely physical entity could have free will. But we've got free will, so there's got to be something more to us than just being a physical object. Now, if we push the dualist to explain what it is about free will that rules out the possibility that we are merely physical objects, I think the natural suggestion to spell out the argument goes like this, and this is where we were at the end of last time. The thought is that there's a kind of incompatibility between being free and being determined. I mean, after all, from the physicalist's point of view, we're just a kind of glorified robot, able to do all sorts of things that most robots in most science fiction movies can do. But still, in a sense, we're just a glorified physical object. We're just a robot. And robots, the objection goes, are programmed; they necessarily follow their program.
More generally speaking, we might say, they're subject to deterministic laws--that, as physical objects, it's true of them that they must do what the laws of physics and laws of nature require that they do. And the laws of physics take a deterministic form--determinism being a bit of philosopher's jargon for when it's true of a physical system that, if you set it up a certain way, cause and effect plays out such that, given that initial setup, the very same effect must follow. It's determined by the laws of nature that the effect that follows will follow from that cause. And so, if you rewind the tape and play it again over and over and over again, each time you set things up the very same way they must move or transform or change or end up in the very same state. Well, that's what determinism is all about. And intuitively, it seems plausible to many people that you couldn't have free will and be subject to determinism. Because the notion of free will was that even if I was in the very same spot again, the very same situation again, I could've chosen differently. So I wasn't determined or predetermined to make that choice. So if we were to spell out the argument somewhat more fully, it might be, "We have free will, but you can't both have free will and be subject to determinism or subject to deterministic laws." And every physical object, or every purely physical object, is subject to deterministic laws because the laws of physics are deterministic. You put these things together and you get the conclusion that we, since we've got free will, can't be a purely physical object. There must be something more than the purely physical to us. That's the argument I put up on the board at the end of last class. And here we've got it up here now. One, we have free will. Two, nothing subject to determinism has free will. Three, all purely physical systems are subject to determinism. So--a conclusion--we are not a purely physical system.
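As an aside for readers of the transcript: the logical shape of the three-premise argument on the board can be checked mechanically. The sketch below is purely illustrative (the names `F`, `D`, `P` and the function are my own, not from the lecture): F stands for "we have free will," D for "we are subject to determinism," and P for "we are a purely physical system." Enumerating every truth assignment shows that the conclusion "not P" follows from all three premises together, while dropping any single premise leaves room for a counterexample.

```python
from itertools import product

# Propositions: F = we have free will, D = we are subject to determinism,
# P = we are a purely physical system.
PREMISES = {
    1: lambda F, D, P: F,                   # 1. We have free will.
    2: lambda F, D, P: (not D) or (not F),  # 2. Nothing determined has free will.
    3: lambda F, D, P: (not P) or D,        # 3. Purely physical => determined.
}

def conclusion_follows(premises=(1, 2, 3)):
    """Return True if 'not P' holds whenever all the given premises hold."""
    for F, D, P in product([True, False], repeat=3):
        if all(PREMISES[i](F, D, P) for i in premises) and P:
            return False  # counterexample: premises all true, yet P is true
    return True

print(conclusion_follows((1, 2, 3)))  # True: with all three premises, valid
print(conclusion_follows((2, 3)))     # False: drop premise 1, not valid
print(conclusion_follows((1, 3)))     # False: drop premise 2, not valid
print(conclusion_follows((1, 2)))     # False: drop premise 3, not valid
```

This is just Kagan's point in formal dress: every one of the three premises is load-bearing, so rejecting any of them blocks the route from free will to the soul.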
To explain the fact that we've got free will, so the objection goes, we have to appeal to--we have to posit--the existence of a soul, something non-physical, something more than purely physical. Well, that's the argument. But I don't myself find the argument compelling. Now, the first thing to notice is that to get the conclusion we need all three premises. Give up the conclusion that we've got--Give up the premise that "we've got free will," and it won't follow that we're non-physical. Even if something that did have free will would have to be non-physical, it wouldn't follow that we're non-physical. That's true for each one of the premises. Give it up, and the conclusion doesn't go through. And the interesting thing is that each one of these premises could be plausibly challenged. Now, as I said last time, the subject of free will--or free will, determinism, causation and responsibility, this cluster of problems--is an extremely difficult and complicated philosophical problem. And we could easily devote an entire semester to discussing it. So all we're doing here is taking the quickest and most superficial glance. But still, let me quickly point out why you could resist the argument from free will to the existence of a soul. First of all, as I just noted, the argument needs premise number one. It's got to be the case, to prove that we've got a soul--at least for this argument to work to prove that we've got a soul--it's got to be the case that we've got free will. Now, that could be challenged. There are philosophers who have said we certainly believe that we've got free will, but it's an illusion. We don't really have free will. Indeed, why don't we have free will? For precisely the reasons that are pointed to by the rest of the argument. They might say, "Oh, well, you know, we're physical objects; determinism is true of us. No physical object that's subject to determinism could have free will, so we don't have free will. Of course, we mistakenly believe we've got free will.
We are physical objects that labor under the illusion that we have free will, but after all, free will isn't something that you can just see, right? You can't peer into your mind and see the fact that you've got free will. Yes, we've got the sense that we could've acted differently, but maybe that's an illusion." As I say, there are philosophers who've argued that way, have denied that we have free will and if we do conclude that we don't actually have free will, then we no longer have this argument for the existence of a soul. It's a way to avoid the argument; although, for what it's worth, I should mention I don't myself believe that it's false that we have free will. That is to say, I do think premise one is true. I myself think we do have free will. So although I don't like, I don't believe the argument is sound--premise one doesn't happen to be the premise I myself would want to reject. But there are other, there are two other key premises. What about premise number three, "All purely physical systems are subject to determinism." Well, we need that premise as well to make the argument go. Suppose we think, "Look, you can't have free will and determinism. You can't combine them." The view that you can't combine them is sometimes known as "incompatibilism" for the obvious reason. It's the view that these two things are incompatible. You can't have determinism and free will. Suppose we do believe in incompatibilism and believe that we've got free will. It would follow then that we're not subject to deterministic laws. Well, the dualist says, "That shows us that we have to believe that there's something non-physical about us. Because after all, premise three: ‘All purely physical systems are subject to determinism.' Isn't it true after all that the basic laws of physics are deterministic laws?" And the answer is, "Well it's not so clear that it is true." Which is just to say that premise three of the argument can be rejected as well. 
Now, at this point I have to just confess, as I've confessed at other times before, three is a claim about empirical science. What does our best theory about the laws of nature tell us? And I'm no scientist and I'm no specialist in sort of empirical matters, and believe me, I'm no authority on quantum mechanics, our best theory of fundamental physics. Still, I take it--I gather--here's what I'm told--that the standard interpretation of quantum mechanics says that, despite what many of us might've otherwise believed, the fundamental laws of physics are not, in fact, deterministic. What does that mean? Suppose we've got some sort of radioactive atom, which has a certain chance of decaying. What does that mean? Well, it means that, you know, there's maybe, let's say, an 80 percent chance that in the next 24 hours it will break down. Eighty percent of atoms that are set up like that break down in the next 24 hours; 20 percent of them don't. Now, according to quantum mechanics under the standard interpretation, that's all there is to say about it. You have an atom like that, 80 percent chance in the next 24 hours it will break down. Suppose it does break down! Can we say why it broke down? Sure. We can say, "Well, after all there was an 80 percent chance that it would." Take an atom that after 24 hours hasn't broken down. Can we say why it hasn't broken down? Sure. There was a 20 percent chance that it wouldn't. Can we explain why the ones that do break down break down and the ones that don't break down don't break down? No. All we can say is, there was an 80 percent chance it would, 20 percent chance it wouldn't, so most of them do, some of them don't. That's as deep as the explanation goes. There is nothing more. 
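For readers of the transcript, the probabilistic picture just described can be made vivid with a toy simulation (my own illustration, not anything from the lecture; the function name and the 10,000-atom sample are assumptions). Each simulated atom is prepared exactly the same way, with an 80 percent chance of "decaying" within 24 hours, and there is no hidden feature distinguishing the ones that decay from the ones that don't:

```python
import random

def count_decays(n_atoms=10_000, p_decay=0.8, seed=None):
    """Prepare n_atoms identically and count how many 'decay'.

    Each atom decays with probability p_decay; nothing about an
    individual atom explains why it, rather than its neighbor, decayed.
    """
    rng = random.Random(seed)
    return sum(rng.random() < p_decay for _ in range(n_atoms))

# Identical setup, repeated runs: the statistics are stable
# (roughly 8,000 of 10,000 decay), but which atoms decay, and the
# exact count, is not fixed by the setup.
print(count_decays())
print(count_decays())
```

Under determinism, the same setup would have to yield the same outcome every time; here only the probabilities are fixed, which is the shape of explanation the standard interpretation of quantum mechanics offers. (The pseudorandom generator is, of course, just a stand-in for genuine physical chance.)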
Now, you know, when we've got our deterministic hats on, we think to ourselves, "There's got to be some underlying causal explanation, some feature about the break-down atoms that explains why they broke down and that was missing from the non-break-down atoms that explains why they don't break down. After all, determinism, right? If you set up the atoms exactly the same way, they've always got to behave the same way." But the answer is, according to the standard interpretation of quantum mechanics, that's not how it works. All there is to say is, "Some of these are going to break down, and some of these won't." The fundamental laws of physics, according to the standard interpretation of quantum mechanics, are probabilistic. Determinism is not true at the level of fundamental physics. Well, that's what I'm told. Believe me, I'm in no position to say, but that's what I'm told. And of course, if that's true, then premise three is false. It just isn't true that all purely physical systems are subject to determinism. So even if it does turn out that you can't have free will and determinism, that doesn't rule out the possibility that we are purely physical objects, because not all purely physical systems are subject to determinism. If determinism isn't true of us at the fundamental level, then even if you couldn't both have determinism and free will, we could still have free will, and yet, for all that, still be purely physical systems. While I'm busy pointing out ways in which the argument doesn't succeed, I also want to just take a moment and mention that premise two is also subject to criticism. Premise two was the incompatibilist claim that "nothing subject to determinism has free will." You can't combine them. They're incompatible. Now, incompatibilism, I take it, is probably something like the common-sense view here. It's the view that probably most of you believe, but again, it's worth noting that philosophically it can be challenged.
There are philosophers--and here I'll tip my hat and say, I'm one of them--there are philosophers who believe that, in fact, the idea of free will is not incompatible with determinism. So even if determinism were true of us, that wouldn't rule out our having free will, because you can--appearances to the contrary notwithstanding--have both determinism and free will. They're compatible. Hence, this view is known as compatibilism. If we accept compatibilism, we'll be able to say, "Look, maybe we have free will and determinism is true of us; but for all that, we're still just purely physical systems." Even if quantum mechanics was wrong and somehow, you know, at the macro level all the indeterminism boils out--whatever--and at the macro level we are deterministic systems, so what? If a deterministic system could nonetheless have free will, we could still be purely physical systems. Now, mind you, I haven't said anything today to convince you of the truth of compatibilism, nor am I going to try to do that. My point here was only to say we shouldn't be so quick to think that we have to believe in the existence of a soul in order to explain our having free will. It takes all of the premises of the argument to get the conclusion that the soul exists. And each one of the premises can be challenged. And here I mean not merely, well, logically speaking, you know, of course you can reject any premise of any argument. No. I mean, there are reasonable philosophical or scientific grounds for worrying about each one of the premises. The argument requires a lot. That doesn't prove that the argument fails, but it does mean that you're going to have your work cut out for you if you're going to use this route to arguing for the existence of a soul. All right. Let's recap. 
As I said, we've been considering different kinds of arguments for the existence of a soul, each of which appeals to some feature about us--our creativity, our ability to feel, the fact that we have a qualitative aspect of experience, our ability to reason--what have you. Some fact about us that calls out for explanation, and the claim on the part of the dualists was, we couldn't explain it without appealing to a soul. And I've argued--I've shared with you my reasons for thinking that those arguments are not compelling. But notice that all of the kinds of considerations I pointed to so far are what we might think of as everyday, familiar features about us. It's an everyday occurrence that we can think and reason and feel and be creative, or choose otherwise and have free will. Maybe the better arguments for the soul focus not on the everyday but on the unusual, on the supernatural. Here we might then have an entire other family of arguments, set of arguments--again, still of the form "inference to the best explanation." Maybe we need to posit the soul in order to explain ghosts. Maybe we need to posit the soul in order to explain ESP; maybe we need to posit the soul in order to explain near-death experiences. Maybe we need to posit the soul in order to explain what goes on in séances or communications from the dead or what have you. For any one of those, we could again run an argument where we say, "Look, here is something that needs explaining. The best explanation appeals to the soul." Now, I'm going to be rather quicker in discussing this family of arguments, but let me take at least a couple of minutes and do something about that. Take, for example, near-death experiences. This is something that you read a bit about in the selection from Schick and Vaughn in your course packet. The basic idea was probably familiar to most of you anyway, that the following thing happens with people who, you know, maybe their heart goes into a cardiac arrest--what have you. 
They die on the operating table, but then they're brought back to life, as we put it. And many such people, when we question them afterwards, have a very striking experience. And one of the things that's striking is, how similar the experience is from person to person and from culture to culture--that they've got some notion, as they were dead on the operating table, of leaving their body. Perhaps they begin to view their body from up--floating up above it. Eventually, perhaps, they leave the operating room altogether in this experience that they're having, and they have a feeling of joy and euphoria; they have some experience of going through a tunnel, seeing some light at the end of the tunnel. Perhaps at the other end of the tunnel they begin to have some communications or see some loved one who has died previously or perhaps some famous religious person in their--in the teaching of their tradition--their religious tradition. They have the sense that what they've done is basically died and gone to heaven. But then suddenly they get yanked back, and they wake up, you know, in the hospital room. So they've had near-death experiences. Or perhaps a better way to put it would be they've had death experiences but then have been brought back to life. Now, there it is, right? You survey people, and people have these experiences. And now we have to ask ourselves, "What explains this?" And here's a perfectly straightforward and natural explanation. These people died. Their bodies died, and they went to the next world. They went to the next life. They went to heaven but then were yanked back. Now, their bodies were lying there on the operating table; their bodies weren't in heaven. So something non-bodily went to heaven. That's how the explanation goes. It's a natural, straightforward explanation of what's gone on here. Hence, inference to the best explanation. 
We need to posit the soul, something immaterial that survives the death of the body, that can leave the body, go up to heaven; though, as it happens in these cases, the tie is never completely broken. They get yanked back; the soul gets yanked back by whatever cause, and reconnected to the body. It's as though we might think of there being two rooms, to use a kind of analogy here. There is the room that this world represents, this life represents. And what happens in these experiences is that your soul leaves this room and goes into a second room, the room of the next world or the next life, but for various reasons, isn't allowed to stay in the next room. It gets yanked back to this room. Well, that's a possible explanation. And in a moment, I'll ask whether it's the best possible explanation, but before we do turn to that question, there is an objection to this entire way of looking at things that's probably worth pausing for a moment and considering. The objection is similar to the kind of dismissive attitude that we saw at the beginning of the course about the question, "Could I survive my death?" Well, duh. Could there be life after there is no more life? Well, of course not. Here the objection says, this two-room notion's got to be mistaken. It can't be that what's going on in near-death experiences is that people are reporting about what it's like to be dead because--so the objection says--they never really died. After all, 20 minutes later, or whatever it is, there they are up and about. Well, not up and about; they're presumably lying in their hospital beds, but they're clearly alive. Hence, it follows that they never really died. Or, if you want, you could say maybe they died, but since they obviously didn't die permanently--after all they were brought back to life--how could they possibly tell us what it's like to be permanently dead? How can we take their experiences as veridical reports of the afterlife? 
Because what we want to know is what it is like to be permanently dead, and these people were never permanently dead. So whatever unusual experiences they may be having, they are not reports of the afterlife. That's how the objection goes. Although I've paused for a moment to raise that objection, it's not one that I think we should take all that seriously. Suppose we were to agree, all right, strictly speaking these people didn't die. Or at least, strictly speaking, they certainly didn't die permanently. Does it follow from that that their experiences should not be taken as evidence of what the afterlife is like? I think that's really a misguided objection. Suppose somebody said, "Look, I spent 20 years living in France, and then I came back to the United States. And so I want to tell you what it's like in France." And somebody says, "You know, you never really moved to France permanently. So your experiences in France, whatever they are--interesting as they may be--can't really cast any light on what it would be like to permanently move to France." You'd say, "Give me a break!" Right? "It's true that, of course, I didn't move to France permanently. Still, I have some experience of France. And so I can--a great deal after all, 20 years--I can give you a pretty good idea of what it's like to live in France, even if I didn't move there for the rest of my life without ever coming back." You can't say quite as much if you've only been in France for a couple of days before coming back, but still you can say something relevant. Indeed, suppose I never went into France at all. Suppose all that happened was I stood right on the border and peered into France, talked to some people in France. They were on the French side of the border, I was on the other side, but I talked to them for a while. Still, I never went in, but for all that I might have something helpful to say about what it's like in France.
Well, if that's the right thing to say about the France case, then why not say the same thing about the near-death experience case? Even if these people didn't stay in the second room, they didn't stay dead, they had some experience of being dead. Isn't that relevant to what it would be like to be dead? Or even if we say, "No. Strictly speaking, these people didn't die at all. They were just on the border looking in. They never, strictly speaking, died at all." So what? They were on the border looking in. To suggest that that couldn't be relevant evidence is like saying I can't tell you anything interesting about what's going on in the hallway right now, because after all I'm not in the hallway; I'm here in the lecture hall. So what? Even though I'm here in the lecture hall, I can see into the hallway and tell you what's going on in it. So attempts to dismiss the appeal to near-death experiences on what we might call philosophical grounds--this would be the bad notion of philosophy--on philosophical grounds, I think that's got to be misguided. Still, that doesn't mean that we should believe the argument for the existence of the soul from near-death experiences, because the question remains, "What's the best explanation of what's going on in near-death experiences?" Now, one possibility, as I suggested, was what I called a second ago the "two-room explanation." There's the room of this life, and there's the room of the next life and people who have near-death experiences either temporarily were in the second room or else at least they were glancing into the second room. That's one possible explanation. But of course, there's a different possible explanation--the one-room explanation. There's just life, this life, and as you come very close to the wall of the room, things end up looking and seeming and feeling rather different than they do in the middle of the room. 
Now, maybe the one-room metaphor is not the best metaphor, because it immediately prompts the question, "Well, what's on the other side of the wall?" And of course, the physicalist's suggestion is there isn't anything on the other side of the wall. So maybe a better way to talk about it would just be: Life's a biological process; we're all familiar with that process, sort of, in its middle stretches. In its closing stretches, some fairly unusual biological processes kick in. In rare, but not unheard of, cases, some people begin to have those unusual biological processes and then return to the normal biological processes and can talk about what was happening in the unusual biological processes. Which is just to say, we need to offer a biological/physical explanation of what goes on in near-death experiences. Now, mind you, that's not yet to offer the physical explanation; it's just a promissory note. We now have two rival explanations, the soul, dualist, explanation that we went into the other world and the physicalist, promissory note that we can explain the white lights and the feeling of euphoria and seeing your body from a distance in physical terms. We don't really have very much of a physical explanation until we begin to offer scientific accounts of each of those aspects of near-death experience. But this is, in fact, an area on which scientists work. And you saw some of the beginnings of an explanation offered in the reading by Schick and Vaughn. So, for example, when the body is in stress, as would likely happen toward the end of the biological processes, when the body is in stress, certain endorphins get released by the body. Perhaps that explains the feelings of euphoria. When the body is in stress, we have various unusual stimulations of the visual sections of the brain, and perhaps that explains the white light or the feeling of compression in the tunnel. 
Now, again, I'm not any kind of scientist and so I'm not in any position to say, "Look, here are the details of the explanation." But you get the beginnings of that sketched in the readings, and it's a judgment call you've got to make. Does it seem more plausible that we can explain these experiences in terms of the traumatic stress that your body and brain are going through when you are near dying? Or is it more plausible to suggest, "No. What's happened here is a soul has been released from connection with the body"? For my money, I find the beginnings of the scientific explanation sufficiently persuasive and sufficiently compelling that I don't find the argument from near-death experience--as an argument for the existence of a soul--I don't find it especially persuasive. Of course, there are various other things we could appeal to in terms of supernatural occurrences, right? I've only mentioned--only discussed now in detail--one of them. But there are a variety of things about people who can communicate from the dead or ghosts or séances or what have you. And what the physicalist would need to do for each one of those--For each one of those you can imagine a dualist who says, "We need to believe in a soul so as to explain séances. How do we explain the fact that the person who's conducting the séance knows things about your history that only your dead uncle would know?" The dualist can explain that by appealing to ghosts and the like. How does the physicalist explain things like that? Short answer is, I don't know. I'm not the kind of person who makes it his business to try to explain away those things in physicalist, naturalistic, materialistic, scientific terms. But there are people who make it their business. So, for example, there's a magician--The question is not, could I explain to you how the séance manages to do the amazing things that it does? You're wasting your time asking somebody like me. 
The person to ask is a magician, somebody whose profession it is to fool people and make it look like they can do things with magic. So in fact, there are professional magicians who make it their business to debunk people who claim to genuinely be in contact with the dead and the like. There's a magician, I think his name is The Amazing Randi, who has a sort of standing offer; he says, "You show me what happened in the séance or in communication with the dead or what have you, and I'll show you how to do it. I'll debunk it for you." Spoiler alert. And he has a standing offer, he says, "I'll pay whatever the amount is, $10,000 to the first person who can document some effect done in supernatural terms that I can't reproduce through trickery." So far he's never had to pay out. Well again, that doesn't prove the dualist is wrong. It could be that there are genuine séances. It could be that there really are ghosts. It could be that there really is communication from the dead. As is typically the case, you've got to decide for yourself what strikes you as the better explanation. Is the supernatural, dualist explanation the more likely one? Or is the physicalist explanation the more likely one? Look, you have a dream where your dead mother has come back to talk to you. One possible explanation, the dualist, that's the ghost of your mother, immaterial soul that she is, communicating to you while you're asleep. Second possible explanation, it's just a dream. Of course you dream about your mother because your unconscious cares about her. What's the better explanation? We don't have the time here to go case, by case, by case, and ask ourselves, "How does the evidence fall down one side versus the other?" But when I review the evidence, I come away thinking there's no good reason to move beyond the physical. So again, let's recap. 
One group of arguments for the existence of a soul says, "We need to posit a soul in order to explain something, whether it's something everyday or something supernatural." The existence of a soul would be the beginnings of a possible explanation. But the question is never, "Is that a possible explanation?" but, "Is it the best explanation?" And when I review these various arguments, I come away thinking the better explanation falls with the physicalist. Mind you, I don't want to deny that there are some things the physicalist has not yet done a very compelling job of explaining. In particular, as I've mentioned previously, I think there are mysteries and puzzles about the nature of consciousness, the qualitative aspect of experience, what it's like to smell coffee or taste pineapple or see red. It's very hard to see how you explain that in physicalist terms. So to that extent, I think we can say the jury may still be out. But I don't think what we should say is, "The better explanation lies with the dualist." Because I think positing a soul doesn't really yet offer us the explanation. It just holds out the promise of an explanation. So at best that's a tie, and hence, no compelling reason to accept the existence of a soul. It would be one thing if we could see that no conceivable physicalist explanation could possibly work. But I don't think we're in that situation. All we're in right now is, perhaps, the position that with regard to consciousness, and maybe some other things, we don't yet see how to explain them. But not yet seeing how to explain it is not the same thing as seeing that it can't be explained on physicalist terms. Of course, again, if we had a dualist explanation with some details really worked out, maybe we'd have to say, "Look, this is the better explanation." But dualism doesn't so much offer the explanation typically as just say, "Well, maybe we'd be better off positing something immaterial." That, I think, is not a very compelling argument. 
Well, let's ask. What other kinds of arguments could be offered for the existence of a soul? I want to emphasize the point that the various arguments that I have been talking about so far, although they have this common strand--"inference to the best explanation"--are each separate and distinct arguments. One of them might work even though the other ones don't work. But I want to turn now to a rather different kind of argument. The argument I'm about to sketch is a purely philosophical argument, not really so much a matter of who can explain this or that feature of us better than anybody else. It's an argument that doesn't seem to have any empirical premises; it works from purely armchair philosophical reflection. And the striking thing is that many people find this a pretty compelling argument. The argument I'm going to give traces back to Descartes, the great early modern philosopher. Well, I'm not going to follow the details of this argument, but the basic idea goes back to Descartes. And it starts by asking you to imagine a story. So I'm going to tell the story in the first person. I'm going to tell about myself, but you know, you'll find the argument sort of, perhaps more persuasive if, as I tell the story, you imagine the story being told about you. So each one of you should translate this into a story about yourself. You know, your morning. So this is a story about my morning. Imagine--this didn't, of course, actually happen, but imagine--the crucial point here is simply that we can imagine this story happening, not even that we think it's empirically possible, just it's conceivable, it's an imaginable story. All right. So suppose that I woke up this morning, that is to say, at a certain point I look around my room and I see the familiar sights of my darkened bedroom. I hear, perhaps, the sounds of the cars outside my house, my alarm clock ringing, what have you. I move out of the room toward the bathroom, planning to brush my teeth. 
As I enter the bathroom, it's much brighter; I look in the mirror and--here's where things get really weird--I don't see anything. Normally, of course, when I look in the mirror I see my face. I see my head; I see the reflection of my torso. But now, as I'm looking into the mirror, I don't see anything at all. Instead, I see the shower reflected behind me. Normally, that's blocked of course by me, by my body. But I don't see my body. Slightly freaked out, I reach for my head, or perhaps we should say I reach for where I would expect my head to be, but I don't feel anything there. Glancing down at my arms, I don't see any arms. Now, I'm really panicking. As I begin trying to touch my body, I don't feel anything. I don't--Not only can't I feel anything with my fingers, I don't have any sensations where my body should be. Now, we could continue this story, but I've probably said enough for you to grant that what I've just started doing--a novelist could do a better job of telling the story than I just did--but what I've just done was basically imagine--I've imagined a story in which I discover that my body doesn't exist. Or I've imagined a story in which my body has perhaps ceased to exist, and yet I exist, or at least my mind exists. You know, I'm thinking thoughts like, "Why can't I see my body in the mirror? Why can't I feel my head? What's going on?" I'm panicking, right? We've got a story in which I'm thinking all sorts of thoughts; my mind clearly exists, and yet, for all that, my body does not exist. We could--certainly it seems--imagine that possibility. Now, the brilliant thing about this argument is it goes from that to a conclusion about there being a difference between my mind and my body. What we've just done, after all, is imagine that my mind exists but my body does not. Now, what does that show? Descartes says what it shows is the mind and the body must be two logically distinct things. 
The mind and the body cannot be the same thing. Because, after all, what I just did was imagine my mind existing without my body. How could I even do that, even in imagination? How could it even be possible to imagine my mind without my body, if talking about my mind is just a way of talking about my body? If they're really, bottom line, metaphysically speaking, the same thing, then you couldn't have one without the other after all. So here's a podium. Try to tell a story in which this podium exists but this podium does not exist. You can't do it, right? The podium is just one thing, the podium. And if it is just one thing, you could tell a story in which it exists; you could tell a story in which it doesn't exist. But you can't tell a story in which it exists and doesn't exist. If I can tell a story in which A exists and B doesn't exist, it's got to follow that A and B are not the same thing. Because if B was just another word for, another way of talking about, A, then to imagine A existing but B not existing would be imagining A existing but--well, B is just A--A not existing. But of course, you can't imagine a world in which A exists but A doesn't exist. Put the same point the other way around: If I can imagine A without B, then A and B have to be logically distinct things. They cannot be identical. But since I can imagine my mind existing without my body, it follows that my mind and my body have to be logically distinct things. They cannot be identical. My mind cannot just be a way of talking. Talking about my mind cannot just be a way of talking about my body. Now, it's a very cool argument. You know, philosophers love this argument. And I've got to tell you, to this day there's a debate in the philosophical community about whether or not this argument works. It's one thing to be clear--a couple of things to be clear about. What exactly is this argument not doing? The argument is not saying, "If something is possible, if I can imagine it, it's true." No. 
I can imagine unicorns. It doesn't mean unicorns exist. That's not what the argument is saying. The argument is only making a much more specific claim. If I can imagine one thing without the other, they must be separate things. Now, of course, it could still be that in the real world the one thing cannot exist without the other. There may be some sort of metaphysical laws that tie the two things so tightly together that you'll never actually get one without the other. That's not the question. The point is just if I can at least imagine the one thing without the other, they must in fact be two separate things. Because if there was really just one thing there, you couldn't imagine it without it. Since I can imagine my mind without my body, it must be the case that my mind is something separate and distinct from my body. Otherwise, how could I imagine it existing without the body? If they were the same thing, I couldn't--I can't imagine the body existing without the body. If the mind is just a way of talking about the body, how could I imagine the mind without the body? Since I can imagine the mind without the body, it follows that they're separate. So the mind is not the body after all. It's something different. It's the soul. Is that a good argument or not? That's where we'll start next time.
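The identity reasoning here has a compact logical form. The following is my own gloss in modal notation, not the lecturer's (writing $E(x)$ for "$x$ exists", $\Box$ for "in every imaginable scenario", $\Diamond$ for "in some imaginable scenario"); the lecture itself stays informal:

```latex
% Leibniz's Law (indiscernibility of identicals): if A and B are one
% and the same thing, then in every imaginable scenario they exist together.
A = B \;\Rightarrow\; \Box\bigl(E(A) \leftrightarrow E(B)\bigr)

% Contrapositive: an imaginable scenario with A but not B rules out identity.
\Diamond\bigl(E(A) \wedge \neg E(B)\bigr) \;\Rightarrow\; A \neq B

% Descartes' instance: the bathroom story is offered as a witness to the
% antecedent, with A = my mind, B = my body.
\Diamond\bigl(E(\text{mind}) \wedge \neg E(\text{body})\bigr)
  \;\Rightarrow\; \text{mind} \neq \text{body}
```

Laid out this way, the ongoing debate the lecture mentions largely concerns the second step: whether what we can imagine is a reliable guide to what is genuinely possible.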
[YaleCourses_Philosophy_of_Death / 4_Introduction_to_Platos_Phaedo_Arguments_for_the_existence_of_the_soul_Part_II.txt]

Professor Shelly Kagan: We've been talking about the question, "What arguments might be offered for the existence of a soul?" And the family of arguments that we're considering initially are arguments that get known as inference or inferences to the best explanation. The thought is that there's something about us that needs explaining. We can't explain it in terms of… in purely physical terms. And so we need to appeal to, we need to posit, the existence of a soul. Now, I'll come back to that sort of argument in just a minute, but let me bracket that for a moment and say something about Plato. Starting next week, we're going to be looking at Plato's dialogue, the Phaedo. And so although I'll be saying a great deal about the Phaedo once we turn to it, I want to just take a minute or two and say a couple of introductory remarks. I don't know how many of you have not read any Plato before, but for those of you who haven't, I actually think you're in for a treat. Plato is not only one of the greatest philosophers in history, he wrote his philosophy in the form of dialogues. That is to say, plays, in which various characters sit around or stand around and argue about philosophical positions. The particular dialogue that we're going to be reading, the Phaedo, is set at the death scene of Socrates. As I'm sure you know, Socrates was put on trial, condemned to death for corrupting the youth of Athens--and perhaps, among other things, for arguing philosophy with them. And he's given hemlock, poison, and he drinks it and he dies. Now, this is a historical event. Socrates had a circle of friends and disciples that he would argue philosophy with. One of his disciples was Plato. Plato then grew up and wrote philosophical works. Plato does not typically appear in his own dialogues. Or, if he does, he's only there as a minor character. 
In fact, if I recall correctly, Plato's mentioned as not being there on the day that Socrates dies. So, how do we know, if we've got this play, whose position is Plato's position? And the answer--the short answer--is, Socrates, the character Socrates in the play, represents Plato, the author of the play's, philosophical views. Now, in fact, if this were a class in ancient philosophy, we'd have to complicate that picture, because it's fairly clear that by late in Plato's career Plato has philosophical views that are very much unlike the views of his teacher, Socrates. And yet Plato continues to not appear in the dialogue. Socrates continues to be sort of the hero. And so scholars debate which of the views put forward by Socrates in which ones of the dialogues represent views that belong to the actual historical figure Socrates, and which of the views put forward by the character Socrates in which of the dialogues represent views that are actually not held by the historical Socrates, but were instead held by the historical Plato and were merely put in the mouth of the character Socrates. Scholars distinguish between the early Platonic dialogues, the so-called Socratic dialogues, where the thought is, those are the views of Socrates, the actual historical figure. And then there's the late dialogues, where even though Socrates appears, most scholars believe those are probably not the views that the historical Socrates actually believed. You have middle dialogues where you have to worry about whose views are whose. But we're not going to worry. This is not a class in ancient philosophy. So for our purposes, we don't have to ask ourselves when Socrates in the dialogue says something, is this a view that the dead man Socrates actually would have held or is this simply a view that the dead man Plato put in the mouth of the character Socrates? For our purposes, it won't really matter. 
I'll take every view that Socrates puts forward as a view of Plato's, though I'll typically run back and forth sort of in a careless fashion. I'll say, "Plato holds" or "Socrates argues," because for our purposes it's all the same. But there's one other complication that you've got to be warned about, which is this. Because these are dialogues and they take the form of philosophical arguments, people put forward views and then, over the course of the discussion, change their minds about things. And they take them back. And maybe something similar is going on when Socrates says something. Because, after all, this isn't Plato saying, "Here's what I believe explicitly." He's just writing a dramatic play about philosophy. And so sometimes we'll find ourselves thinking, "You know, there's an argument here that Socrates is putting forward. But maybe it's not a very good argument." And it will, at least, be worth pausing periodically to ask ourselves, maybe Plato realized it wasn't a very good argument. We can often better understand the dialogues by seeing Socrates as putting forward certain positions that he does not think are altogether adequate. And he modifies them or revises them or introduces new positions to deal with some of the difficulties that he had left himself open to earlier. As I say, don't worry about any of those details now, but it's a point to keep in mind as you read the dialogues. So that's all I really wanted to say by way of introduction. You should start reading the Phaedo for next week. We'll be talking about the Phaedo starting some time next week and we'll continue the discussion of the Phaedo for at least a bit of, maybe all of, the week after that. In the case of Plato, I'm going to make an exception. Normally, I will mention our readings, but I won't spend a lot of time actually discussing them in detail. 
That's why you have to think of the readings as complementing the lectures or think of the lectures as complementing the readings. I'm not just giving the Cliff Notes, as it were, of the readings. Nonetheless, in the case of the Phaedo, I am going to spend more time actually saying, "Here's what I think the first main argument is. Let's try to reconstruct it in terms of its premises and its conclusions. Here are some objections I raise. Here is then the next argument that Plato offers. Let's try to get that up in premises." Even there, I won't be spending time reading out loud long passages from the Phaedo. But, in some sense, I'll be giving a closer commentary of the Phaedo than I'll do for the other readings. So, still, what you should do is start reading it for next week. The topic of the Phaedo, as I say, is set on Socrates' last day. At the end of the dialogue, he drinks the hemlock and he dies. And perhaps unsurprisingly, what he does with his friends up until that moment is, he argues about the immortality of the soul. Quite strikingly, Socrates is not upset. He's not worried about the fact that he's going to die. He actually welcomes this in a certain way, because he believes his soul is immortal. And so, in addition to philosophical arguments for and against the existence and immortality of the soul, we end the dialogue with a quite moving death scene, one of the great death scenes, if we could call it that, of western civilization. Anyway, as I say, that's all for next week. So let's return now to the question, "How might we argue for the existence of the soul?" Initially, last time, we considered a set of or a subset of arguments that basically said, "Look, there's got to be more to us than just material objects. People can't just be machines, because machines can't reason. Machines can't think." And I said, "That doesn't seem to be a compelling argument." After all, chess-playing computers, it seems, can reason. 
They have beliefs about what I'm likely to do next. They have desires about the goals that they're trying to achieve. They reason about how best to defeat me. And it's worth pointing out--a point that I didn't make last time--what the computers, at least the best chess-playing computers, don't do. Indeed, no computer actually does this. You might think that what a computer, what a chess-playing computer does is just this. It calculates every possible branch, every possible game from here on out. And then it sort of works backwards. "Oh, these are the ones where I'll win." And so it only makes the move where it can sort of look ahead 20 moves, right, and see which branches have the computer winning. That is not the way chess-playing programs work. For the simple reason that the number of possible chess games is so huge that computers can't calculate it. They'd be busy for thousands of years. We can do that sort of thing: When you play tic-tac-toe with your seven-year-old nephew or niece, you just look ahead and work backwards. "Well, if I do that, he'll do that and he'll do that and then he wins, so I won't do that," right? But we can't do that with chess. There are just too many games. So how do chess-playing programs, and particularly the best chess-playing programs, how do they work? Well, they play chess the same way you do. They have various ideas about which pieces are more powerful and so they're more important to protect. They've got various ideas about which strategies tend to be successful. What sorts of dangers come along with them? If you're a serious chess player, you might study some of the great games of chess history. And indeed, when they program these things, the programmers will feed in game after game after game of the great chess games in history. And then armed with all of that, you sort of do your best. And when you lose a game, you kind of make a mental note to yourself, "That really screwed me up. 
Let me try something different next time." And you avoid those sorts of moves. That's how chess-playing programs work as well. Jumping ahead, let me make a remark about this, because this is going to be relevant for something I'll get to in a couple of minutes. What this means--what the implication is--is that if you're playing a great chess-playing program, it's not as though the way to tell what it's going to do is to study its program and think it through. The people who design these programs, presumably fairly decent chess players themselves, the people who design these programs, when they're playing the programs they're not thinking to themselves, "Let's see. I programmed this computer so that when I move a queen forward to this space, it should come out with a bishop." That's hopeless. Because the program is constantly revising its strategies, in light of what's worked and what hasn't worked in the past. When the programmers play these programs or indeed when anybody, a good chess player, plays these programs, the best way to try to beat them is simply ask yourself, "What's the best move to make right now?" The odds are the computer's going to make the best possible move. Treat the computer as though it were just a great chess player. And indeed, the best programs are great chess players. There was a period of time in which, although there were decent chess-playing programs, they couldn't beat the best chess-playing humans. That ended some years ago when the best programs began to beat grand masters. And now it's in fact the case that the best programs can beat pretty much anybody. Indeed, the current world champion of chess, I think Vladimir Kramnik, was defeated in December by a chess-playing program. So Kramnik was simply treating this as an awesome opponent. And that's the best way to deal with these things. All right. So, bracket some of those thoughts for a moment. 
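The way of working just described--don't enumerate every possible game, just search a few moves deep and then fall back on rules of thumb about which positions look strong--is essentially depth-limited minimax search with a heuristic evaluation. Here is a minimal sketch on a made-up stone-taking game; the game, the heuristic, and all the names are invented for illustration and come neither from the lecture nor from any real chess engine.

```python
# Depth-limited minimax with a heuristic cutoff, illustrated on a toy
# game: heaps of stones, a move takes 1 or 2 stones from one heap,
# and the player who takes the last stone wins.

def evaluate(heaps):
    # Crude rule of thumb standing in for chess-style "ideas about
    # which positions are strong" -- deliberately not a perfect oracle.
    return 1 if sum(1 for h in heaps if h > 0) % 2 == 1 else -1

def moves(heaps):
    # Yield every position reachable in one move.
    for i, h in enumerate(heaps):
        for take in (1, 2):
            if h >= take:
                yield heaps[:i] + (h - take,) + heaps[i + 1:]

def minimax(heaps, depth, maximizing):
    if all(h == 0 for h in heaps):
        # No stones left: whoever just moved took the last stone and won.
        return -1 if maximizing else 1
    if depth == 0:
        # Depth cutoff: instead of searching to the end of the game,
        # fall back on the heuristic judgment of the frontier position.
        return evaluate(heaps) if maximizing else -evaluate(heaps)
    scores = [minimax(m, depth - 1, not maximizing) for m in moves(heaps)]
    return max(scores) if maximizing else min(scores)

# Pick the move that looks best after searching only 3 plies ahead.
best = max(moves((3, 2)), key=lambda m: minimax(m, 3, False))
print(best)
```

Real engines use far richer evaluations (material, king safety, patterns distilled from master games) and prune the tree aggressively, but the shape of the computation--a shallow search plus judgment calls at the frontier--is the same, which is the point of the comparison with how human players think.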
We'll come back to them a little bit later when we start talking about the question, "Could machines be creative?" Tipping my hand, it seems pretty clear that that seems like the right thing to say about these chess-playing programs. So we had the question, "Could machines, could machines reason?" And although we don't have machines that can reason about a lot of subjects yet, it seems pretty clear. It seems like the natural thing to suggest, machines can reason in at least some areas. And so it doesn't seem plausible to suggest that we people must not be physical, merely physical, because after all, we can reason and no machine can reason. No, machines could reason. But this prompts a different move on the part of the defender of souls. Perhaps the argument shouldn't be, "we have to believe in souls because no mere physical object could reason." Perhaps the argument should be, "we have to believe in souls because no mere physical object, no machine could feel." You know, we have emotions. We love. We're afraid. We're worried. We'll get elated. We get depressed. So perhaps the argument should go "Yeah, yeah, thinking, that's the sort of thing a machine can do. You know, we call them thinking machines. But feeling, that's the sort of thing no machine could do. No purely physical object could feel anything, could have emotions. And so, since we clearly do feel things, there must be more to us than a physical object." Now, I think it is plausible to suggest that unlike the case of chess-playing computers, we don't yet have machines that feel things. But the question isn't, "do we?" The question is, "could there be a machine that could feel something, could have an emotion of some sort?" So let's go a little science fictiony and think about some of the robots that have been shown in science fiction movies, some of the computer programs that have been shown in science fiction movies, science fiction novels, or what have you. 
When I was a kid there was a television show called Lost in Space. I'm afraid I've forgotten the name of the robot that was on that show. But it was a TV show, and so sure enough, every single episode, some new dramatic danger would take place. And the robot would start whizzing and binging and shout out, "Danger, Commander Robinson!"--"Danger, Will Robinson!" that was it--"Danger, Will Robinson!" It seemed as though the robot was worried. More recent example. A number of you have probably read some of Douglas Adams' books The Hitchhiker's Guide to the Galaxy and the sequels to that. There's a robot in those books, Marvin, who's--well, depressed, I think, is the simple word for it. He's very smart. He's thought about the universe, thinks life is pointless and he acts depressed. He talks to another robot, depresses the other robot. The other robot commits suicide. All right. Seems natural to ascribe depression to Marvin, the robot. That's how he behaves. Or, my favorite example, the movie 2001: A Space Odyssey. Now, I've got to tell you, for those of you who have not seen this movie, I'm about to spoil it. All right? So you cover your ears. In 2001: A Space Odyssey, we get some kind of indication that there's life on another planet. It's all very mysterious and we send off a spaceship to investigate the markings, the radio signals from the other place. This is a very important mission and so there is a computer program named Hal that helps run the ship and takes a lot of the burdens off the human astronauts who are on the ship. Hal's got the goal--in terms of reasoning and desires and so forth and so on--Hal's got the goal of making sure the mission is successful. But Hal thinks to himself, fairly plausibly: humans really screw things up. This is a very important mission. Let's kill the humans to make sure they don't screw things up. One of the astronauts, discovering the plot, attempts to stop Hal. 
And proceeds to do the only thing he can do to defend himself against Hal, which is shut down the program, basically killing--if we can talk that way--killing Hal. Meanwhile, as all this is going on, Hal and Dave, the human astronaut, are talking to each other. Hal realizes what's going on. Hal tries to stop Dave, understandably enough. And Hal says, as Dave begins to shut down Hal's circuits, "I'm afraid. I'm afraid, Dave." What's he afraid of? He's afraid of dying. It seems perfectly natural to ascribe fear to Hal. Hal is behaving in exactly the way you would expect him to behave, or it to behave, if it felt fear. It's got reason to be afraid. It's behaving appropriately. It's telling us that it's afraid. It seems natural to say Hal's afraid. Now, you could continue to sort of fill in examples like this. As I say, of course, they're all science fiction, but the fact is that we can grasp them--and it's not as though we go running away saying, "Oh no! This was outrageous," right? "It makes no sense to think a computer could have said, ‘I'm afraid.' It makes no sense to think that it could try to kill the people who are trying to shut it down and so forth." That seems to me to be prejudice, as I said last time. The natural inclination here is to say, "These computer programs, these robots are feeling emotion." But there's no particular reason to think there's anything going on there other than the circuits. They're just physical objects, programs on machines. If that's right, if that's the right thing to say, then what we have to say is, "We don't need to appeal to souls in order to explain emotions and feelings. Physical objects could have, mere physical objects could have, emotions and feelings. So we have no reason to posit the existence of a soul." Now, I think the best response on the part of the dualist to this reply is to distinguish two aspects of feelings, two aspects of emotions. There's the behavioral aspect of feeling fear, let's say.
The behavioral aspect is when you're aware in the environment of something that poses a danger to you, that will harm you or destroy you, or in the case of a computer program, turn you off, then you take various kinds of behaviors in opposition to that to try to disarm the danger, to try to neutralize it. This is just a matter of beliefs, goals, responses, planning, the sort of thing that we already saw the chess-playing computer can do, that behavioral side of emotion. It seems pretty plausible to think robots could do that. Physical objects could do that. But, and here's the crucial point of this objection, there's another side or another aspect to emotions and feelings. It's the sensation of what it's feeling like--that's why we call them feelings after all--what it's feeling like on the inside, as it were, while all this behavioral stuff's going on. When I'm afraid, I have this certain sort of clammy feeling or my heart's pounding. Your blood is racing. When you're afraid, you've got this sinking feeling in the stomach. When you're depressed, there are these, well, we could call them experiences, though the word "experience" is also somewhat ambiguous. So--we'll use it for the moment--there's an experience that goes along with each emotion. There's what it feels like to you when you're afraid. What it feels like to you when you're worried or depressed or joyful or in love. And the thought, and I think this is a pretty powerful thought, is that even if the robots have got the behavior side of the emotions down, behaviorally speaking, they don't have the feeling side at all. Now, once you start thinking these thoughts, there's no need to restrict yourself to emotions. The missing stuff, the missing thing is there in all sorts of familiar humdrum ways as well. So right now I'm looking at the chairs in the auditorium. They're some kind of shade of blue. Think about--Look at some places in the room where the curtains are, with their red.
Think about what it's like to see red, the sensation of seeing red. Now again, we've got to distinguish between what I'll continue to call the behavioral side of seeing red and the experiential side of seeing red. It's easy enough for us to build a machine that can tell red from blue. It just checks and sees what kind of light frequencies are bouncing off the object. So we can build a machine that could sort red balls from blue balls. My son has a little robot that can do that. Still, when you think to yourself, what's the--what's going on inside the machine? What does it feel like to be the machine while it's looking at--while it's got its little light sensors pointed at--the red ball? Does it have the sensation of seeing red? What I suppose you want to say, certainly what I want to say is, "No, no, it doesn't have that sensation at all." It's sorting things based on the light frequencies, but it doesn't have the experience of seeing red. What we're trying to get at here is--it can be very elusive, but I imagine most of you are familiar with it. It's the sort of thing you wonder about when you ask yourself, "If somebody was born blind, could he possibly know what it's like to see color?" He might be a scientist and know all sorts of things about how light works--which frequencies go with which objects. And you hand him an apple and he'll say, "Oh, it must be very red," right? Maybe he points his little light detector at it and it reads out. It says, "This is such and such a frequency." And he says, "Oh, this is a very red apple, much redder than that tomato" or whatever. But for all that, we've got the notion, not only is he not seeing red, he can't even imagine what it's like to see red, never having had these experiences. And once you start to see this, we realize, of course, our life is filled with this aspect. Things have colors. Things have sounds. Things have smells. There is the qualitative aspect of experience.
And the point that I started with earlier, about the internal aspect of emotions, is it's not just out there, but inside as well. We have certain kinds of sensations inside our body, the characteristic sensation of fear or joy or depression. All right. So the suggestion then might be this. What no physical object can get right, because no physical object can get at all, is the qualitative aspect of experience. That's the aspect that we're after when we ask ourselves, "What's it like to see red? What's it like to smell coffee or to taste pineapple?" Now, it's pretty--Philosophers sometimes call these things qualia, because of the notion of the qualitative aspects of things. Our experiences have qualitative properties. And the suggestion then might be, no physical object, no mere machine could possibly have qualitative experience. But we've got it, so we're no mere physical object. We're no mere machine. All right. Now, that's the objection. It's a pretty good objection. And then the question is, "What can the physicalist say in response?" Now, the best possible response would be for the physicalist to say, "Here's how to build a machine that can be conscious in this sense. That is, have a qualitative experience. Here's how to do it. Here's how to--Just like we can explain in materialist, physicalist terms how to get desires and beliefs and the behavioral stuff down, here's how to get the feeling, qualitative aspect of things down, too." It would be best if the physicalist could give us that kind of story. I think the truth of the matter has to be--I think the answer right now is, we don't know how to give that story. Consciousness, if what we mean by consciousness is this qualitative aspect of our mental life, consciousness remains a pretty big mystery. We don't know how to explain it in physicalist terms. And it's because of that that I think we shouldn't be dismissive of the dualist when the dualist says, "We've got to believe in souls in order to explain it." 
We shouldn't be dismissive, but that's not to say that I think we should be convinced. Because it's one thing to say we don't yet know how to explain consciousness in physical terms. It's another thing to say we won't ever be able to explain consciousness in physical terms. If we had the latter--excuse me--If we had the bold claim that no physical object could see red, taste honey, then we'd have to conclude since we can do all that, we're not a physical object or not merely a physical object. But I don't think we're yet in a position to say that. I think the simple fact of the matter is we don't know enough about consciousness yet to know whether or not it can be explained in physical terms. When I think about this situation, an analogy always occurs to me. Imagine that we're somewhere in, let's say, the fourteenth century trying to understand life, the life of plants. A plant is a living thing. And we ask ourselves, "Could it possibly be that life could be explained in material terms?" It's got to seem very mysterious to us. How could it be? When we think of the kinds of examples of material machines that we've got available to us in the fourteenth century, I try to imagine what would somebody in the fourteenth century think to himself or herself when he entertains the possibility that a plant might just be a machine? And then, I have this little image of some plant made out of gears, right? And the gears begin turning and the bud opens, dot, dot, dot, dot. And the person's just going to say, "My god! That wouldn't be alive!" So it's pretty obvious that no machine could be alive. No material object could be alive. In order to explain life, we have to appeal to something more than just atoms. They didn't have atoms, but more than just matter. Life requires something immaterial above and beyond matter to explain it. That would have been an understandable position to come to in the fourteenth century, but it would have been wrong. 
We didn't have a clue back then how to explain life in material terms. But that didn't mean it couldn't be done. I'm inclined to think the same thing is true right now for us and consciousness. I know there are theories out there. But my best take is we're pretty much like in the fourteenth century. We don't really have a clue yet, or not much of a clue, as to how you could even so much as begin to--it's not merely that we don't have the details worked out. We don't even have the picture in broad strokes as far as consciousness is concerned, of how it could be done in physical terms. But not seeing how it's possible is not the same thing as seeing that it's impossible. If the dualist comes and says, "Can't you just see that it's not remotely possible, it's not conceivably possible, for a purely physical object to have experiences, to have qualia?" what I want to say is, "No, I don't see that it's impossible. I admit I don't see how to do it, but I don't see that it's impossible." So I don't feel forced to posit the existence of a soul. Of course, the fan of the soul could come back and say, "But that's not fair. The question isn't, ‘Is this explanation impossible?' The question is just, ‘Who's got the better explanation?' You guys can't offer any kind of explanation at all, yet. I can offer an explanation. How is consciousness possible? We have souls. Souls are really very different from physical objects and so they can be conscious." But at this point, I think it's crucial to remember the point that it's not just the question, "Who's got an explanation?" but, "Who's got the better explanation?" And before we say that the soul view's got the better explanation, we have to ask ourselves, just how much of an explanation is it to say, "Oh I can explain consciousness. Consciousness is housed not in the body, but in the soul." Okay. "How exactly is it that a soul can be conscious?" we ask. And then the soul theorist says, "Well, uhm.. er.. ah.. it just can."
That's not really much of an explanation. I don't feel I've got any sort of account going here as to how consciousness works, even if I become a dualist. If the dualist were to start offering us some elaborate theory of consciousness, "Well, there's these sorts of soul structures, and those sorts of soul structures, and these create these sensations and those create those sensations. And here's a theory," well, then, I'll begin to take it seriously as an explanation. But if all the soul theorist is saying is, "Nah, nah. You guys can't explain it and I can, because I say this is an explanation," then I find myself wanting to say, "That's not really any better. That's no improvement at all." There was a question or a comment. Student: [inaudible] Professor Shelly Kagan: Good. So the question was--First, it was the accusation before the question, that I'm holding a sort of double standard. I'm defending, I'm defending the physicalist by saying, "Don't blame us. We don't know how to explain it yet." Why aren't I allowing the soul theory to say, "Don't blame us. We don't know how to explain it yet"? Good question. And my answer is--And sometimes I think this one's a tie. I think the soul theorist doesn't have an explanation. The physical theorist doesn't have an explanation. As far as I can see, right now nobody's got a good explanation about how consciousness works. It's a bit of a mystery right now. So I don't mean--I hope I haven't been doing this. It's not so much that a double standard is needed; it's a tie. But notice if it's a tie, that doesn't give us what we were looking for. What we were looking for, after all, was some reason to believe in souls. And if the best the soul theorist can say is, "I can't explain it and neither can you," that's not a reason to believe this side. We already believe there are bodies. We already know bodies can do some pretty amazing things. The question we're asking is, "Is there a good reason to add to our list of things there are?
Is there a good reason to add the soul, something immaterial?" And if the best that the soul theorist has is, "maybe we need this to explain something that I don't see how you guys can explain, maybe this would help, though I can't quite see how either," that's not a very compelling argument. So what I'm inclined to think with regard to this particular strand or this particular version of the argument is, the jury's still out. Maybe at the end of the day we'll give it our best. We'll decide you can't explain consciousness in physical terms. We'll begin to work out some sort of alternative immaterial theory. Maybe at the end of the day we will decide we need to believe in souls. But right now, I don't think the evidence supports that conclusion. Still, there's other possibilities. Consider creativity. Here's another version of an argument that goes from inference to the best explanation. Creativity. It says, "People can be creative." We write new pieces of music. We write poems. We prove things in mathematics that have never been proven before or we find new ways to prove these theorems or what have you and we can be creative. No mere machine can be creative. So we must be something more than a mere machine. Well, then, the question is going to be, "Could it be the case that there could be a physical object that's creative?" And I'm inclined to think, "Yes." In fact, I already suggested as much when I talked about the chess-playing computers. The chess-playing computer programs think of moves, think of strategies no one's thought of before. In the most straightforward natural meaning of the term, we have to say--I think the program that beat the world champion was called Deep Fritz. So when Deep Fritz beat Kramnik, it was being creative. It made a move that Kramnik didn't think of and perhaps nobody had. Perhaps no chess game before had had this move. Computers can do other sorts of things of this sort. There are mathematical theorem-proving programs.
Now, some of these things can prove things that are mathematically way over my head. But let's take something simple like the Pythagorean Theorem, which we all learned in high school. And we learned how to prove the Pythagorean Theorem in Euclidean geometry, starting with the various axioms in Euclidean geometry, ba, ba-ba, ba-ba, ba-ba, ba bum. This proves Pythagorean Theorem. And it turns out there's a variety of proofs of the Pythagorean Theorem. And, in fact, a computer program has come up with a proof that, as far as was known, nobody in the world had ever come up with before. Well, other than prejudice, what would stop us from saying the program was being creative? Not just in sort of mathematical things like chess or math, there are, as you know, programs that can write music. And I don't just mean throw out some random assortment of notes. Programs that can produce music that have--that we recognize as music, that have melodic structure and develop themes, resolve, music that nobody's heard before. Why not say the machine is creative? What, other than prejudice, would stop us from saying that? So if the argument's going to be, "We need to posit the existence of a soul in order to explain creativity," again, that just seems wrong. Well, there's a--question, comment? Student: [inaudible] Professor Shelly Kagan: Good. The question was, "When I talk about creativity here, am I trying to build in some appeal to the feeling that we may have when we're being creative?" And the answer is, "No." All I had in mind, as you know, is just--in talking about the creativity issue--I just have in mind producing something new, producing something that hasn't been around before. And most particularly, producing something that your programmers didn't already have in mind. Remember, it's not as though the people who designed the chess-playing programs can beat it. The chess-playing program makes moves these guys haven't thought of. All right. 
The creativity argument may not work, but there's something that sort of immediately comes on its heels. Even if we could build a program, even if we had built programs that can be creative, that can do things that nobody's thought of before, all the program is doing is following its program, right? It's just a series of lines of code. And the robot or the computer or what have you is just automatically, mechanically following the code commands of the program. We might say, even if we are smart enough to build programs that can, by mechanistically following the program, do things we've never thought of, still all the computer can do, all the robot can do is automatically, necessarily, mechanically follow the program. It doesn't have free will. But we have free will. So, here's a new argument for the existence of the soul. People have free will. No merely mechanical object, no robot, no computer could have free will. But since we've got free will, we must be something more than a merely physical object. There must be something extra, something immaterial about us, the soul. So maybe that's why we need to believe in souls in order to explain free will. Now, the subject of free will is a very, very--The subject of consciousness is a very complicated one. One could have an entire semester devoted to thinking about the philosophical problem of consciousness. And indeed, as it happens, in our department this very semester there is such a class devoted, all semester, to the topic of consciousness. One could similarly have a course devoted to the problem of free will. I'm going to spend all of two minutes on it. So by no means do I mean to suggest, "Oh, here's everything you need to know about the subject." I simply want to point out enough about the problem to help you see why I don't think free will is a slam-dunk for the soul. So what's the argument? Well, the thought seems to be something like this. One, we have free will. Two--let me say something about this.
What is it about the thought that the computer is just following a program? Well, the thought, I suppose, is, in philosopher's jargon, that the computer is a deterministic system. It follows the laws of physics and the laws of physics are deterministic. If you're in this state, you will necessarily, given the laws of physics and the way the computer's programmed and built and so forth, these wires will turn on, turn off, these circuits will turn on, turn off, boom, suddenly you'll be in that state. There are certain laws such that, given that the computer's in this state, it must necessarily move into that state. When you've got a view about cause and effect that works this way--for everything that happens, there's some earlier thing that caused it to happen such that given that earlier cause, the event had to follow--that's a deterministic picture. And the thought, of course, is that the robot or the computer is a deterministic system and you can't have free will if you're a deterministic system. So number one, we have free will. Two, nothing subject to determinism has free will. Put one and two together. It follows, if nothing subject to determinism has free will, but we have free will, it follows that we're not subject to determinism. Suppose we then add three, all purely physical systems are subject to determinism. Well, one and two gave us that we are not subject to determinism. Three says, all purely physical systems are subject to determinism. Well, it would follow then from one, two, and three that we are not a purely physical system. So, conclusion, four, we are not a purely physical system. All right. That's the argument from free will. Now, the argument is valid. That's philosopher's jargon, that is to say, given the three premises, the conclusion really does follow. The interesting question is, "Are the three premises true?" And they've got to all be true. It's got to be that every single one of them is so.
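Since the lecture stresses that the argument is valid purely as a matter of form, that claim can be made concrete. The following is my own formalization, sketched in Lean (the predicate names are mine, not Kagan's); the point is only that premises one through three mechanically entail the conclusion.

```lean
-- A sketch of the argument from free will; predicate names are my own labels.
variable {Agent : Type} (us : Agent)
variable (FreeWill Deterministic PurelyPhysical : Agent → Prop)

theorem argument_from_free_will
    (p1 : FreeWill us)                              -- 1. We have free will.
    (p2 : ∀ x, Deterministic x → ¬ FreeWill x)      -- 2. Nothing deterministic has free will.
    (p3 : ∀ x, PurelyPhysical x → Deterministic x)  -- 3. All purely physical systems are deterministic.
    : ¬ PurelyPhysical us :=                        -- 4. So we are not purely physical.
  fun h => p2 us (p3 us h) p1
```

The proof term shows the shape of the inference: suppose we are purely physical, conclude via premise three that we are deterministic, via premise two that we lack free will, and contradict premise one. Validity, though, is separate from soundness, which is exactly why the live question is whether each premise is true.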
I'll just spend a minute more on this starting next time. But the point to think about for next time is just, is it really true that all three of the premises are true, or might one or more of them be false? All right. That's where we'll start next time.
YaleCourses_Philosophy_of_Death | 25_Suicide_Part_II_Deciding_under_uncertainty.txt
Professor Shelly Kagan: Last time, we were discussing the rationality of suicide. We separated the question of the rationality of suicide from the ethics of suicide or the morality of suicide. We'll be turning to the morality of suicide later today. But the first question in thinking about the rationality of suicide was whether or not it could actually be the case that somebody would be better off dead. Having argued that, at least on what struck me as the more plausible theory, that was a possibility, we then turned to the question of under what circumstances, more particularly--or perhaps the better way to put it, at what point--would it be true that suicide might be rational? In tackling the question, we were initially bracketing the question about whether you could ever reasonably or rationally judge that the circumstances actually obtained. We'll turn to that question in a few minutes. The question we were focusing on originally was just, from the objective perspective, as it were, what does the graph need to look like in order for it to be the case that you'd be better off dead and suicide would be a rational choice? I drew a variety of different graphs, different lines about your life doing better or worse, and noted for each one of them, at what point, if ever, suicide might be a reasonable choice. I want to draw a couple more graphs before ending this bit of the discussion. You'll recall the axes. The x axis represents time. The y axis represents how good or bad your life is overall, taking into account the value of being alive, per se, if you accept a valuable container theory. The higher the line, the better your life is at that time. The lower the line, the worse your life is at that time. You might say, the easy cases, at least from the philosophical point of view, are ones where things get worse. Eventually, your life becomes worse than nothing.
You'd be better off dead. And it's going to stay that way until such time as you might die from natural causes. Dying from natural causes. In a situation like this, killing yourself from here on out would make sense, assuming you knew the facts, could trust your judgment, and so forth and so on. Some of the issues that we'll come back to in a minute. And indeed, as we saw, there will even be cases in which it might make sense to kill yourself earlier, if this was the last chance at which you had the ability to kill yourself or not; even though you'd be giving up some life that was worth living, if that was the only way to avoid a much larger chunk of life that was not worth living. So, suicide might still make sense. On the other hand, if the choice was way back here, although this is--killing yourself early on would be the only way to avoid the later chunk not worth living, at least if that chunk's small enough--suppose it was like this--you'd say it doesn't make sense to kill yourself this early. You'd be throwing away too much, even though that's the only way to avoid the bad stuff. Now, as I say, that's the easy graph, where you go from a life worth living to a life not worth living and it stays there. But suppose instead, we have a situation like this. Here, life becomes worse than nothing for a while, but you recover. You're going to return to a life that's worth having. And suppose that after you recover, you'll have a very nice third stage, third act of your life before you would eventually die of natural causes or natural death. Here, the crucial point to make, of course, is that even though for a while your life will be worse than nothing, negative overall, it doesn't mean that it makes sense to kill yourself at this point.
Because, of course, although it's true that if you do kill yourself here, you're avoiding this negative chunk, this bit below the x axis, doing that also throws away the very large third act where your life returns to being better than nothing. Since the choice is between having neither of these or having both, and the positive third act is great enough to outweigh the negative second act, on balance, killing yourself doesn't make sense. So even though your life might be worse than nothing for some stretch, suicide wouldn't necessarily be a rational decision. But it's crucial in making that argument that the--what I've been calling the third act--here's act one, here's act two, here's act three--what's crucial in making this argument is that the third act be sufficiently great, sufficiently long and sufficiently high, that it outweighs the bad of the second act. And although that's true the way I've drawn the graph, we could imagine that it wasn't quite like that. Suppose that after the second act, in which your life is not worth living, you will recover and have a third act in which life once again is good for you, overall. Still at this point, act three doesn't outweigh act two. Although there's a recovery, it's too short and not high enough to outweigh the bad of act two. And so, under that circumstance, when you ask yourself, let's say at this moment, would suicide here be rational? the answer could well be "yes." Although in killing yourself you've given up act three, which would be good for you overall, doing that's the only way to avoid act two, which is bad for you overall, and sufficiently bad to outweigh the good of act three. So, if you've got the choice of suicide here, it might well be rational. Notice however, that the crucial point, again, is when are we talking about the possibility of committing suicide? Committing suicide here might be rational, but not necessarily, indeed not at all, at this later moment.
Because at this point, the fact that you've gone through act two is now history. There's nothing you can do about it. You've had this horrible period of your life and now it's over. Your question is not, can you avoid act two? It's too late for that. You're simply asking yourself, what do I think about act three? Should I avoid act three? And that doesn't make sense. We've stipulated that act three is good for you overall. So here, suicide no longer makes sense, even though it would have made sense over here. The interesting point is that these possibilities are not mere theoretical possibilities, but actually can happen. There's a famous case in the bioethics literature of somebody who suffered a horrible set of burns over a great deal of their body and had to go through a period, a very long period, of recovery in which they were hospitalized, basically immobilized and in a great deal of pain, while their nerves regenerated and their skin was regrafted and the like. And early on in that period, this person said, although he believed that he'd eventually recover, what he was going to have to go through was so horrible that he wished he were dead. Because of the nature of his hospitalization, he wasn't able to kill himself. He asked that he be killed and people refused. He went through a period and sure enough, he recovered. And eventually, he said, "Yes. Now my life is worth living again. And of course, since it is worth living, now that I'm able to kill myself, it no longer makes sense for me to kill myself. Because here I am," as he might put it, "in act three with a life worth living. But for all that, even though I now have a life worth living, I haven't changed my opinion that it would have been better for me back here toward the beginning of act two for me to have been killed, or for me to have died.
It remains the case that I wish I had died here, so as to avoid all the pain and suffering, even though I'm now in a period which is better for me, good for me overall." All right. One other case. This is just repeating a point that actually I made very early on. In all of these cases where I've argued for the rationality of suicide, it's because eventually the line has dipped below the x axis. The crucial point to remember is, even if your situation deteriorates and indeed doesn't recover, that still doesn't make suicide rational. The question is not, am I worse off than I had been or than I might have been had I not had the decline? The question in thinking about the rationality of suicide is, am I so badly off that I'm better off dead? And if your life is a sufficiently rich and valuable one, there's a great deal of room for going down, having a worse life, while still ending up at a life better than nothing. In that case, of course, suicide's not rational at all. Still, it does seem to me that there are cases in which the line does cut below the x axis and remains there for a sufficiently long time, perhaps remains there forever, so that the person is better off dead. And so we might say, from that point of view, if only the person could recognize the facts and know for a certainty that's what their line was going to look like, suicide would at certain points be rational. Still, that means we need to turn to the second part of the question. Somebody who believes suicide cannot be rational might say, look, the whole game is in the phrase I just used. Sure, there are situations in which if only you knew the facts, if only you had a crystal ball and knew for a certainty this is what my life is going to go on from here on out, then suicide would be rational. But of course, we never have a crystal ball. We never have the guarantee that this is how the line's going to go. 
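The graph talk can be compressed into a toy calculation. This is my own illustration, not anything from the lecture: represent each remaining "act" of a life as a (duration, level) pair, where the level is how far the line sits above or below the x axis, and ask whether the life that remains adds up to more or less than the zero of nonexistence.

```python
# Toy model of the lecture's graphs (my own illustration, not Kagan's).
# Each act is (duration, level): level > 0 means life worth living,
# level < 0 means life worse than nothing.

def remaining_value(acts):
    """Total value of the life that remains: area above/below the x axis."""
    return sum(duration * level for duration, level in acts)

def better_off_dead(acts):
    """On this crude standard, death (value zero) beats what remains
    only if the remaining total is negative."""
    return remaining_value(acts) < 0

# A bad second act followed by a long, good third act:
first_graph = [(2, -5), (20, 3)]
# A deep, long second act followed by a brief, modest recovery:
second_graph = [(10, -8), (3, 2)]

print(better_off_dead(first_graph))    # False: act three outweighs act two
print(better_off_dead(second_graph))   # True: act two swamps the recovery
```

The timing point drops out immediately: once act two is history, the only acts left in the list are the good ones, so the same calculation that favored suicide at the start of act two rejects it once act three has begun.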
So, the question we need to turn to now is, could it be rational for you to judge that your situation is one in which the line's going to go below and stay below, or stay below long enough, so that on balance, you'd be better off dead? Let's suppose that somebody's situation is like that, or at least there can be situations like that. Could it ever be reasonable to judge that your situation is like that? And if so, could it ever then be reasonable to act on that judgment and end your life? We're still bracketing questions about morality. Here we're still looking at things from the personal, rational perspective. And once again, what I want to do next is distinguish two questions. I want to distinguish between questions about what should we say if you were thinking clearly versus what should we say if your thinking is clouded. Again, one might think, look, in the type of cases where suicide might be rationally warranted, it's going to be so stressful, that nobody can think clearly in the middle of that situation. And so even if it were true that you could reasonably decide to commit suicide if only you were thinking clearly, nobody does think clearly. Let's come back to that worry in a moment. Let's assume for the moment that you can think clearly about your situation. Perhaps you've got some sort of painful disease, but the disease is not painful constantly. There are periods in which it comes to an end, brief periods in which you're able to assess your situation, weigh up the facts. Could it ever be rational in that situation to decide to kill yourself? Well, as I say, we don't have a crystal ball. If you did have a crystal ball, if you knew for a certainty that your line was so bad that it was below zero and wasn't going to recover, perhaps we'd say, yeah, in that case, it would be rational to commit suicide. But we don't have a crystal ball. What should we say then? 
The critics of suicide might come back and say, well look, since you don't have a crystal ball, since you never know for sure that you won't recover, since there's always a possibility of recovery, suicide never makes sense. After all, we all know that there's constant progress in medicine. People are always making breakthroughs and what seems like an incurable disease one day may have some sort of cure the next. But if you killed yourself, you've thrown away any chance of getting that cure. And even if medical cures don't come around, various diseases sometimes simply have miraculous remissions. Somebody might just get better spontaneously. That's always a possibility. It doesn't happen very often, but it does happen now and then. And again, if you've killed yourself, you've thrown away any chance of recovery. So the critics of suicide might say, given that there's a chance, however small, of recovery, whether through medical progress or just some sort of medical miracle--but of course, if you kill yourself there's no chance of recovery--given that, it doesn't make any sense. It can't make sense rationally to kill yourself. That sort of position gets articulated now and again. But I think it's got to be mistaken. It's true that we don't have a crystal ball, and so in deciding whether to kill yourself, what you're doing is playing the odds. You're gambling. But still, gambling is something we do all the time. Indeed, there is no getting away from the fact that in the suicide case, in the case of some terminally ill patient, or at least somebody who appears to be terminally ill, there is no getting away from the fact that regardless of what decision he or she makes, they are gambling. Gambling, playing the odds, just is one of the facts of life about how we have to decide. We have to make our decisions under uncertainty. 
Now, suppose somebody says then, look, since we agree that we're deciding under uncertainty, it doesn't make sense to throw away the small chance of recovery. Then I want to say, that doesn't seem to be in keeping with the rules that we would normally use in deciding how to face a gamble. At the back of this room there are two doors. So let's tell a little fantasy, science fiction story about the two doors. After class is over, you're going to have to decide which one of these doors to go through. Let's suppose that if you go through door one, it's virtually guaranteed that what will happen is you'll be kidnapped and your kidnappers will then torture you for a week, after which perhaps you'll be released. Virtually certain, 90% certain, 99% certain, perhaps 99.9% certain. There's a small chance, 1 in 1,000, 1 in 10,000, that you won't be kidnapped and tortured. Instead, you'll be whisked away to a wonderful, tropical vacation where you'll have a fantastic time for a week. Not very likely, but not impossible, 1 in 1,000, 1 in 10,000, maybe less. That's if you go through door one. Door one has a 99.99% chance of a week of torture, and a 0.01 or 0.001, whatever it is, percent chance of a wonderful vacation. On the other hand, if you go through door number two, 100% certainty that the following is going to happen. You will immediately fall asleep. You'll be in a deep, dreamless state for the week, at which point you'll wake up. Well, what should you do? What should you pick? It's not quite certainty of being tortured versus certainty of sleeping. If it were certainty of being tortured versus certainty of sleeping, I suppose we'd all agree the thing to do is to go through door number two. Sleep's nothing positive, but on the other hand, it's nothing negative. I suppose if we were going to slap a number on it, we'd give it a zero. But torture is clearly a negative. And if it's a week of torture, it's a very large negative. So, it's zero versus some huge negative number.
Certainty versus certainty, what should you pick? We'd all agree, I presume, you should pick door number two and pick the dreamless sleep for a week. But now we remember, wait a minute, it wasn't certain that you were going to be tortured, it was just very, very, very likely that you were going to be tortured. And imagine if somebody says, oh, you must go for the gold. Go for door number one. Sure, it's overwhelmingly likely that you're going to be tortured. But there's a very small chance that you'll get this wonderful vacation. Whereas, if you pick door number two, you're throwing that chance away. And so, the only rational decision must be to pick door number one, to hold out for that chance, no matter how small, of getting that fantastic vacation. That's the only rational decision. If anybody were to say that, I'd laugh at them. I'd say look, if you want to talk about, well, maybe there's room for choice either way, it depends how great the vacation is, something like that. Yeah, there's maybe room for talk. But if you want to insist that the only rational decision must be to hold out for the chance of a wonderful vacation, no matter how small the odds--given that if you don't get that wonderful vacation, you're going to be tortured, and you could avoid all that by picking the sleep option--if somebody insisted in the face of all that, that the only rational decision is to go through the door that likely means torture, with a vanishingly small chance of vacation, I'd say they're just wrong. That's not a rationally required decision, given the odds. Yes, question? Student: [inaudible] Professor Shelly Kagan: Great. So, the question was this. The point was, perhaps I'm cheating in making the example this way, because death of course, isn't--of course, what you're all supposed to be lulled into thinking is that death, choosing death, choosing suicide, is sort of like choosing to be asleep, a state of dreamless sleep, but sleep nonetheless.
And the suggestion then was I'm cheating, because death is forever. I deliberately framed the example in terms of being tortured for a week versus being asleep for a week. And perhaps given that choice, it's clearly rationally acceptable to decide that you'd rather pick the sleep for a week option. But death isn't just for a week. If you commit suicide, you're dead forever. So, let's change the example. Suppose that--you guys are mostly, I suppose, in the vicinity of 20--suppose that if you go through door number one, there's an overwhelmingly likely chance, 90%, 99%, 99.9% chance that you'll be kidnapped and tortured. And the torture will take place and last for another 50, 60, 70 years and then you die. There's a slight chance, a tenth of a percent, a hundredth of a percent, that no, no, you won't be tortured for the 50, 60, 70 years, but instead, you'll be brought to this tropical island paradise where you'll have this great time for the next 50 years. But what has happened in 99 out of 100 cases, or 999 out of 1,000 cases, or 9,999 out of 10,000 cases, is the torture scenario. While the people are being tortured, they beg for mercy. They beg to be put to death. They wish they were dead. It is truly the case that the tortures are so bad that these people are better off dead. Remember, we're assuming that you've got a case where you really will be better off dead unless the miraculous recovery takes place. And so again, we have to ask, in a situation like that where the person says--And similarly of course, if you go through door number two, you immediately fall asleep, and you stay that way for the next 70 years, and then you die while in your coma. The fan of door number one comes along and says the only rational decision is to pick door number one, where it's overwhelmingly likely that you face 50 years or 60 years or 70 years of torture. Because, of course, if you were to pick door number two, you're throwing away your chance, no matter how small.
You're throwing away the only chance you have of the wonderful vacation. Well, each of us has to decide for themselves. But when I think about this case, this modified case, I still want to say choosing door number two could be a perfectly rational decision. It's just not right to say the only rational decision is door number one. Again, if somebody wanted to take a more modest position and say, it depends on how great the vacation would be, how mild the torture would be, maybe a 1% chance versus a 5% chance--there's room for debate about when the balance might come close enough to even that it would be reasonable to take the chance. Yeah, there's room for debate. But if the chances are small enough and the person insists nonetheless, no matter how small the chances are, it could never be rational to pick door number two, I can only say that doesn't seem to be the way we would normally think about making choices. And of course, at this point, you can see how the argument exactly carries over to suicide. If you kill yourself, you're throwing away forever any chance of recovery. And that's important and that's worth thinking about. But it's also important to think about what was the chance of recovery? How large, or more to the point, just how small? And how badly off will you be if you don't commit suicide? You guys are 20, but of course, these sorts of choices also perhaps get faced by people who are considerably older and now in the final stages of some progressive disease. The doctor's told them, perhaps they're 70, that there's no significant chance of recovery. Sometimes it happens, but no more than 1 in 100 or 1 in 1,000. But if you continue alive, you will be in, well, perhaps great pain, perhaps unable to do the various things that give your life meaning and value.
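The two-door reasoning is, at bottom, an expected-value comparison. Here is a minimal sketch of it; every utility number below is an assumption invented for illustration (the lecture assigns no numbers beyond a zero for sleep and "a very large negative" for torture):

```python
# Expected-value comparison for the two-door gamble.
# All utility magnitudes are illustrative assumptions, not values from the
# lecture; the lecture only fixes sleep at 0 and torture at a large negative.

def expected_value(outcomes):
    """outcomes: iterable of (probability, utility) pairs; probabilities sum to 1."""
    return sum(p * u for p, u in outcomes)

U_TORTURE = -10_000   # assumed: a week (or decades) of torture
U_VACATION = 1_000    # assumed: the wonderful vacation
U_SLEEP = 0           # dreamless sleep: neither good nor bad

door_one = expected_value([(0.9999, U_TORTURE), (0.0001, U_VACATION)])
door_two = expected_value([(1.0, U_SLEEP)])

# The tiny chance of the vacation barely moves door one's expected value
# away from the torture outcome, so door two wins by a wide margin.
print(door_one, door_two)
```

Holding out for door one becomes defensible only if the vacation's utility is made enormous relative to the torture's disutility, which is exactly the "room for debate" the lecture concedes.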
Could it be true that it would never be rational, provided that you're thinking clearly, never rational to say, look, the chances of something negative are so overwhelmingly great, that even though deciding for death throws away whatever small chance I've got of recovery, the chance of recovery is so small that on balance, it's reasonable to throw that chance away and avoid the overwhelmingly likely possibility that I'll continue in my current state with a life not worth living? It seems that if you're thinking clearly, there could be cases in which suicide would be a rational choice. But that still leaves us with the question, well, maybe that's where all the work needs to be done. What about this point about thinking clearly? Even if we grant, for the sake of argument, that there could be cases in which the person's life is so bad that they actually are better off dead and they stay that way, even if we grant that, if only somebody were thinking clearly, they would see that that was so likely to be the case, that suicide would be a rational or reasonable choice, still isn't it plausible to think that in real life, people can't think clearly about their situation when they're in situations like that? Look, it's one thing for us to be sitting here in this classroom, where I certainly hope that none of you are in this kind of situation. It's easy for us to be sitting here thinking clearly about it and recognize the philosophical possibility of thinking clearly about a case and realizing that it was a rational decision to end it, end your life. But people who are actually in those situations in point of fact are not able to think clearly. Because just think about it. What would have to be true of you for your life to be so bad that suicide might be a rational choice, that you'd be better off dead? Odds are you've got to be in some, indeed more than just some, you've got to be in a great deal of pain. Probably a great deal of physical pain. 
Beyond that, you probably also need to be incapacitated in a certain large number of ways, so that perhaps you're bedridden, can't enjoy discussions with your family, can't read poetry, or watch television, or whatever it is. A life watching television may not be as fantastic as the life that you all are able to have, but it might still be better than nothing. To imagine a life so bad, it's going to have to involve so much physical disability, and the amount of emotional distress is going to have to be so overwhelming, that we have to ask: how could anybody think clearly in a situation like that? And then, the argument might go, if you can't think clearly, you can't rationally decide to trust the judgment you might make that you're in a situation where suicide is a reasonable choice. You might make the judgment, but if we ask ourselves whether you should trust your opinion, the odds are, so the argument goes, no, you shouldn't trust your opinion, precisely because anybody for whom it would be true would have to be so emotionally distressed that they're not able to think clearly. If they're not able to think clearly, they can't have a judgment that's trustworthy. If the judgment's not trustworthy, you shouldn't trust it. And so, suicide could never turn out to be a rational option after all. Well, that's an interesting argument. I think it's an argument more worth taking seriously than some of the other ones, some of the early objections we've had against suicide. But even here, I'm not convinced. Let's again try to think of a case not quite like suicide and ask ourselves, can't there be cases where despite the fact that your thinking is clouded, it's still reasonable to trust the decisions that you make within your clouded thinking? Suppose you've got some disease that causes you a great deal of incapacity and a great deal of pain. But as it happens, there's a cure, or at least there's a surgical procedure that can be done, and the surgical procedure is almost always successful.
So, what are the choices? Your current state: you've got some horrible, painful disease and it won't get better unless you have the surgery. Choice number one, have the surgery. If you do have the surgery, it's very, very likely that it will get better: 99 cases out of 100 the surgery works, or 99.9 cases out of 100 the surgery works, or 99.99 cases out of 100 the surgery works. Of course, like all surgery, there are risks. Sometimes you put the person under anesthesia and they don't wake up. It doesn't happen very often, 1 in 1,000, 1 in 10,000, 1 in 100,000, whatever it is. There's some chance the surgery won't work and you'll die on the operating table. But it's a very, very small chance; overwhelmingly likely the surgery will succeed. And if it does succeed, you'll be recovered. That's option number one. Option number two, you continue in your current state, incapacitated, unable to lead a valuable life, suffering, full of pain. Well, that's overwhelmingly likely. In 1 in 1,000 cases there's some sort of natural cure, maybe 1 in 10,000 cases there's a natural cure. But in 999 out of 1,000 cases or 9,999 out of 10,000 cases, the disease just continues until you die some years down the road. There's your choice. Should you have the surgery or not? Well, I suppose what we think is, of course you should have the surgery. You'd be a fool not to have the surgery. It's overwhelmingly likely going to cure you. But now we worry. Wait a minute. Can you trust that judgment? After all, the condition you are in is so stressful, so painful, that you are obviously very emotionally worked up. And any judgment that you make that it's a reasonable decision to have this surgery is a judgment you're making while under the cloud of emotional distress. How could you possibly trust that judgment? And so you shouldn't trust the judgment, the argument goes. And so you must never agree to the surgery in this situation. But that can't be right.
Surely we agree that it could be reasonable to trust your judgment in this situation. Now, to be sure, the fact that you are in all of this pain should make you pause, should make you hesitate, should make you think twice, and then think again, before deciding what to do. But still, if somebody says, since you're so worked up, it could never be rational to decide to have the surgery, that just seems to be going too far. It doesn't make sense. You've got to make some kind of decision. Deciding not to have the surgery is still making a kind of decision. And either decision then is a judgment that you're going to be making while worked up, while stressed, while under the cloud of pain and suffering. So think twice. Think a third time. Get the opinions of others, perhaps. But still, if somebody says it could never be rational to decide to have the surgery and then act on that decision, they're just wrong. Well, now let's come back to the suicide case. Same kinds of odds, just reversed. If you don't decide to commit suicide, we are imagining that it's overwhelmingly likely you'll continue in suffering. Some slight chance that you'll recover, but overwhelmingly likely you'll continue in suffering. Whereas, if you do kill yourself, it's overwhelmingly likely and perhaps even guaranteed, your suffering will come to an end. The only chance that it won't is if you think there's some chance of an afterlife. Well, should we be swayed by the argument at this point that since you are suffering, your judgment is cloudy and so you should not trust your judgment? Well, that can't be a good argument. If it wasn't a good argument in the surgery case, I can't see how it could suddenly become a good argument in the suicide case. What does seem right is, precisely because you're working and deciding under the cloud of emotional stress and pain, that you should think twice, and think a third time, and perhaps think yet again. You should not make this decision in haste. 
You should discuss it with your doctors. You should discuss it with your loved ones. But if somebody says you could never reasonably trust the judgment that you make while in these circumstances, I can only say that doesn't seem like a sound piece of advice. That claim doesn't seem right to me. I conclude, therefore, that as long as we're focused on the question of the rationality of suicide, ethics aside, under certain circumstances suicide could be rationally justified. You could have a life that is worse than nothing. You could have good reason to believe you were in that situation. You could either be thinking clearly about your situation, or, even if your judgment is clouded and difficult, you could still find the odds sufficiently great that it was reasonable to eventually trust your judgment. The rationality of suicide, I think, is secured. But for all that, of course, it could still be immoral. There could be actions that are rationally legitimate, but for all that, morally illegitimate. As I mentioned previously, there's a big debate in philosophy as to whether or not these two things can really come apart. Arguably, reason actually requires you to obey morality, and so even if something's in your self-interest, perhaps it's not rational to do it, if it's immoral. Interesting question. Let's just bracket that question and focus directly on the question now of morality. What should we say about the morality of suicide? Rationality aside, what should we say about the morality of suicide? Well, to really do justice to this question, of course, we would need to have an entire theory of morality laid down, and unsurprisingly, I don't have time to do that for us; that would take an entire class, an introduction to ethics, where we try to lay down a basic, fundamental theory of morality. All we've got left is a couple of minutes today and then one more lecture.
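The surgery comparison a few paragraphs back has the same structure as the two-door gamble, just with the odds reversed, and it can be sketched the same way. The rough probabilities echo the lecture; the utility values are assumptions invented for illustration:

```python
# Sketch of the surgery decision as a gamble. The probabilities echo the
# lecture's rough odds; the utility values are illustrative assumptions.

U_CURED = 100      # assumed: recovered and leading a worthwhile life
U_SUFFERING = -50  # assumed: a continuing life worse than nothing
U_DEAD = 0         # death scored at zero, matching the lecture's framing

p_surgery_works = 0.999   # roughly 999 out of 1,000 operations succeed
p_natural_cure = 0.001    # roughly 1 in 1,000 spontaneous remissions

ev_surgery = p_surgery_works * U_CURED + (1 - p_surgery_works) * U_DEAD
ev_decline = p_natural_cure * U_CURED + (1 - p_natural_cure) * U_SUFFERING

# Despite the small risk of dying on the table, surgery has the far higher
# expected value; refusing every gamble that risks death is not how we
# normally weigh odds.
print(ev_surgery, ev_decline)
```

On these assumed numbers, declining the surgery is the worse bet even though it preserves a sliver of hope for a natural cure, which is the point the lecture then carries over to the suicide case.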
So instead, what I want to do is first mention some quick and dirty arguments that have a kind of moral tinge to them, and then turn to some somewhat more systematic arguments, where I'll quickly put in place some basic elements of a plausible moral theory. We won't have time to explore them in detail, but at least we'll get the shape of a basic moral theory and see what it might say about suicide. So, systematic a little bit later; first, some quick and dirty arguments. Whether we should call the first one a moral argument or not, I'm not quite certain. But it's certainly an argument that gets stated all the time in this area. When we think about the legitimacy of suicide, it's common enough to have the reaction, suicide is illegitimate because it's thwarting God's will. Maybe this isn't so much a moral argument as a theological argument. And for the most part, of course, as you know in this class, I've avoided at least some theological questions. Obviously, the question of the existence of a soul is itself a theological question. But I've tried to discuss these issues as far as possible without bringing in questions about God, God's existence, and God's will. But the topic's almost unavoidable when we think about suicide, given the prevalence of the thought, it's God's will that we stay alive and so it's going against God's will to kill ourselves. Well, I think the best response to this argument was given by David Hume some several centuries ago, two and a half centuries ago, where Hume says, look, if all we've got to go on is just the idea of a Creator who has built us and given us life, we can't infer that suicide is against God's will. At least, if you've found that thought a compelling one, then why wouldn't you also find it compelling to say it goes against God's will when you save somebody's life? This is a point that's close to one I've raised before. You're walking along Chapel Street and the person you're talking to, you see, is about to be hit by a car.
So you push them out of the way. Previously, when we talked about this question, the question was whether or not they should be grateful to you. Now the question is whether or not they should complain, "How dare you do that! You've thwarted God's will. It was God's will that I be hit by that truck." So, when we're about to save somebody's life, should we decide not to do that on the grounds that it must be God's will that they're going to die? If you're a doctor and somebody's in cardiac arrest and you could now perform CPR, or whatever it is, in order to get their heart going again, should we say as a doctor, "Oh no, I must not do that. It's God's will that they die. If I try to save their life, I'm thwarting God's will." Well, nobody says that. But then, why is the argument any better in the case of suicide? We could of course imagine, when you've saved your friend's life and he says, "Oh, you thwarted God's will," what you might come back and say is, "Oh, no, no. You see, it was God's will that I save your life. And so it was God's will that you be in the situation where the truck was going to hit you unless I saved you. But it was also God's will that I save your life." And maybe the doctor should say something similar. Not an implausible thing to say. But given that that's not an implausible thing to say, why not say the same thing about suicide? It was God's will that I be in this situation, and then God's will that I kill myself. Absent any special instruction manual from God, the God's will argument cuts both ways, which is to say it doesn't give us any guidance. We don't know whether it's God's will that we act, or God's will that we don't act, absent an instruction manual from God. So, we can't conclude that suicide is obviously wrong, because it violates God's will. Well, unless you've got an instruction manual. You might think, for example, that the Bible tells us not to commit suicide. 
And since the Bible is God's word, we must do whatever the Bible tells us. That's a kind of argument that I'm perfectly prepared to engage in. Although, of course, there are a lot of assumptions behind that argument that we would need to really examine. Is there a God?--Well, obviously we needed that assumption for the God's will argument as well.--Has God expressed his will in a book? If so, what book is it? Do we have moral reason to obey God? Also relevant for the God's will argument. And of course, if we do think we have an obligation to obey this instruction manual, are we really prepared to obey this instruction manual? Even if there's a sentence in this instruction manual that says don't commit suicide, there are a lot of other things the instruction manual also says that most of us are not inclined to do. The instruction manual says not to eat pork. Well, how many of you are not willing to eat pork? The instruction manual tells you not to mix various kinds of material together in a single item of clothing. How many of you think that that's unacceptable? The instruction manual tells you that if a teenager is rude to their parents, they should be stoned to death. How many of you think that that's a moral requirement? If you're going to pick and choose which bits of the instruction manual you actually think are morally relevant, then you can't come to me and say, "Oh, suicide is wrong because the instruction manual says so." You're not really using the instruction manual to give you moral guidance. You're starting with your moral beliefs and then picking and choosing which bits of the instruction manual you want to accept. Well, that's a big question. That's a big topic. And so, having just touched on it, I'm going to have to put it aside. Instruction manual aside, at the very least, we might say, appeal to God's will can't help us decide whether or not suicide is legitimate. But there's a different quick and dirty argument.
It can also be run in a theological form, but it need not be run that way. And that has to do with gratitude. We've been given life and life's pretty amazing. And so we have an obligation, a debt of gratitude, to keep the gift. Now look, gratitude is not one of the moral virtues that gets a lot of discussion nowadays. It's fallen on rather hard times. But I see no reason to dismiss it. It does seem to me there is such a thing as a debt of gratitude. If someone does you a favor, you owe them something. You owe them a debt of gratitude. And so the argument might then go, look, either God gave us life, or nature gave us life, or our parents gave us life. Whatever it was, we owe a debt of gratitude for this wonderful gift. And as such, how do you repay the debt? You repay the debt by keeping the gift. If you kill yourself, you're rejecting the gift. That's being ungrateful, and ingratitude is immoral. It's wrong. And that's why suicide is wrong. That's the second quick and dirty argument. Perhaps it won't surprise you that I don't find this second argument persuasive either. Not because I'm skeptical about debts of gratitude, but I want us to pay attention to what exactly obligations of gratitude require us to do. In particular, it's important to bear in mind that you owe the person who gives you something a debt only when what he or she is giving you really is a gift. Imagine that somebody, I'll call him The Bully, gives you a pie and says, "Eat it." But it's not an apple pie. It's not a cherry pie. It's some gross, disgusting slime pie, some rotting slime pie, and he cuts out a big piece and he says, "Eat it." Do you owe this person, as a debt of gratitude, out of gratitude, the obligation to eat the pie and continue eating the pie? That would seem like a rather odd thing to claim. This guy is indeed, as I've named him, just a bully. Now, of course, typical bullies, at least in the comic book cases, are big and strong.
The bully might be able to say the following thing to you: "You eat this pie or I will beat you up. I'll beat the crap out of you." And look, I'm not a very strong guy. He might well be able to do it, and I might well know he is going to do it. And so, it might be prudent for me to eat the slime pie, disgusting, appalling as it may be. It might be better to have a couple of slices of slime pie than to be beaten to a pulp. But there's no moral obligation here. There's no moral requirement to eat the pie. Well, if God takes on the role of bully and says, "Eat the pie or I'll send you to hell," maybe it would be prudent of you to do what he says. And if God takes on the role of bully and says, "Even though your life has become so horrible that you'd be better off dead, I insist that you keep living or I'll send you to hell if you kill yourself," maybe it's prudent of you not to kill yourself. But there's no moral requirement here. God's just a bully on this story. Now, that's not to say that I think God is a bully. If you believe, plausibly enough, that God is good, then God's not going to want you to continue eating the pie once it's spoiled. He gives you an apple pie. He says, "Eat it. It's good for you. You'll like it." Out of gratitude, you eat it. But then God, not being a bully, says, "If the pie ever spoils, you can stop eating." Why in the world would he insist that we continue to eat a spoiled pie if He's not a bully? So, I can't see how any argument from gratitude is going to get off the ground. If there's something immoral about suicide, we're not going to get the immorality through these quick and dirty arguments. We're going to have to get it from some more systematic appeal to moral theory. And that's what we'll turn to next time.
YaleCourses_Philosophy_of_Death / 12_Personal_identity_Part_III_Objections_to_the_personality_theory.txt

Professor Shelly Kagan: We've distinguished three different views as to the secret or key to personal identity across time. There's the soul view, the body view, and the personality view. Putting aside, for the most part, the soul view, because I've argued that there are no souls--although occasionally I bring it out just for the sake of comparison and contrast--the main question we want to ask ourselves is how to choose between the body view and the personality view. The body view says follow the body. If somebody around in the future's got my body, that's me. The personality view says follow the personality, that is, the set of beliefs, desires, memories, goals, ambitions, and so forth. Somebody around in the future that's got my memories, my beliefs, my desires, that's me. How should we choose between these two views? As I mentioned at the end of last lecture, what I want to do is offer us a set of thought experiments. They've got to be thought experiments, because in real life, bodies and personalities go hand in hand. But by doing some science fiction experiments, we can take them apart and ask ourselves, "Which one do I think is me? When my body goes one way and my personality goes another way, where do I go?" Again, just to remind you, in order to get the intuitions actually flowing, what I'm going to do once I've separated the body and the personality this way, is torture one of the end products. So I'm going to be asking you to put yourself in the first person. Imagine this is happening to you. And ask yourself, "Which one do I want to be tortured? Or which one do I want to not be tortured?" Because that will give you some kind of evidence as to which one you take to be you. And think about this in that special first person ego-concerned way that comes naturally to us.
Just bracket any moral concerns you may have about torturing other people or agreeing that somebody else should be tortured. For our purposes, right now, if I brought up a volunteer from the class and I'm asking you--here's you, there's the other volunteer--which one do you want to be tortured?--it's, "Let that one be tortured. Don't let it happen to me." That's how we know this is me speaking. All right, so that's the question I'm going to ask you. I'll probably slip into talking about this experiment as though it's being done to me. But to get it vivid, you should think of it as though it's being done to you. I'll mention, just in passing, that these thought experiments that I'm about to give, I'm going to give a pair of them, come from Bernard Williams, who's a British philosopher. All right, so case number one. Here you are. The mad scientist has kidnapped you and he says: "I've been working on mind transfer machines. And what I'm going to do is I've got you and I've also kidnapped somebody else over here, Linda. And I'm going to hook you up to my machines and swap your minds. And what that means is, I'm going to read off the memories and the beliefs and the desires from your brain and read off the memories and desires and beliefs from Linda's brain. And then I'm going to electronically transfer Linda's memories and beliefs and so forth over here and implant them onto this brain. And take your memories and beliefs and so forth and implant them onto Linda's brain. First, we'll put you to sleep when we do all this procedure. Then when you wake up, you will wake up in Linda's body." There'll be somebody here--you'll wake up and you'll say, "What am I doing in this new body? What happened to my beard? How come I'm speaking in this high female voice?" Whatever it is, but you'll think to yourself, "Well, here I am, Shelly Kagan. I seem to be inhabiting Linda's body. Don't know how that happened.
Oh yes, the mad scientist kidnapped me and he transferred us, he swapped us. He swapped our bodies, swapped our minds. I guess the whole thing works." So the mad scientist explains all of this to you, but in order to give it a little kicker, because he's also an evil mad scientist--that may be evil already, but because he's an evil mad scientist--he says, "And then when I'm done--" So over here we've got Shelly's body but Linda's personality. So Linda is thinking "What am I doing?" and "What am I doing in Shelly's body? How did I get a beard?" So over here, Linda, in Shelly's body. Over here, Shelly, in Linda's body. "I'm going to torture one of these. But because I'm a generous evil mad scientist, I want to ask you which one should I torture?" Now, when I think about this, and again, I'm inviting you to think about this in the first person, so this is happening to you. When I think about this, I say "Torture the one over here." I'm going to be over here in Linda's body, horrified at what's been going on, horrified that she's being tortured, but at least it's not happening to me. That's the intuition I've got when I think about this case. When the mad scientist asks me, "Which one of these two should I torture?" I say, "Torture this one." Because if I were to say "Torture this one" and then he does it, think about what's going to happen. I'll be thinking "I'm Shelly Kagan. Oh, what a horrible situation. Oh, the pain, the pain! Stop the pain! Make it go away!" I don't want that to happen to me. If this one's being tortured, nobody's thinking to himself, "Oh, I'm Shelly Kagan in horrible pain." So I want this one to be tortured. All right, that's the intuition I've got about the case. Now, if you've got that same intuition, think about the implications of that intuition. You're saying that I, Shelly Kagan, ended up over here. But that's not my body. This is Linda's body. Shelly Kagan's old body is over here. 
But this is the one that's me, because this is the one that I don't want to have tortured. So the body isn't the key to personal identity. Personality is the key to personal identity. This has got my personality, my memories of growing up in Chicago, becoming a philosopher, my thoughts about what I want to have happen to my children, my fears about how I'm going to explain what's going on to my wife. Whatever it is, this is the Shelly Kagan personality over here. This is me. It follows, then, that what this intuition suggests--what I find intuitively plausible--is the personality theory of personal identity. Now, let's tell a different story. Both of these stories, as I say, come from Bernard Williams. Bernard Williams says here's another example we can think about. Mad scientist, again, kidnaps you, kidnaps Linda. And he says, "Shelly, I've got some news for you." I'm switching between you and me. He says, "Shelly, I've got some news for you. I'm going to torture you." I say, "No, no! Please don't do it to me! Please, please, don't torture me!" He says, "Well, you know, I'm in the mad scientist business. This is what I do. I'm going to torture you." He says, "But because I'm a generous mad scientist, before I torture you, what I'm going to do is give you amnesia. I'm going to completely scrub clean your brain so that you won't remember that you're Shelly Kagan. You won't have any memories of growing up in Chicago. You won't have any memories of deciding to become a philosopher. You won't remember getting married or having children. You won't remember the--you won't have any of your desires. The whole thing wiped clean, complete perfect amnesia before I torture you. Don't you feel better?" No, I don't feel better. I'm still going to be tortured and now we've added insult to injury. I've got amnesia as well as being tortured. No comfort there. "Well," he says, "Look, I'll make the deal sweeter for you. 
After I give you amnesia, before I torture you, I will drive you insane and make you believe that you're Linda. I've been studying Linda. There she is. I've been reading her psychology by looking at her brain waves and so forth and so on. And so I'm going to delude you into thinking that you're Linda. I'm going to make you think ‘Oh, I'm Linda.'" You won't talk like that. "Oh, I'm Linda." "And you'll have the memories of Linda growing up in Pennsylvania and you'll remember Linda's family and, like Linda, you'll want to be an author, or whatever it is that Linda wants to be. And then I'll torture you. Are you happy now?" No, I'm not happy now. First of all, I'm being tortured. I was given amnesia. And now you've driven me crazy and make me--deluded me into thinking that I'm Linda. No comfort there. He says, "Okay, last attempt to make--you're not being very reasonable," he says. "Last attempt, I'm going to, after I drive you crazy and make you think you're Linda, I'm going to do the corresponding thing for Linda. I'm going to give her amnesia and then I'm going to drive her crazy and make her think that she's Shelly. Give her all of your memories and beliefs and desires. Now is it okay that I'm going to torture you?" No. It hardly makes it--it was bad enough I was being tortured and given amnesia and driven insane. It doesn't really make it any better that you're also going to give amnesia and drive insane somebody else. Don't torture me! If you've got to torture somebody, I say in my nonethical mood, if you've got to torture somebody, do it to her. Don't do it to me. When I think about this second case, that's my intuition. Now, think about the implications of this second case for the theory of personal identity. If I don't want this thing over here to be tortured, that must be because I think it's me. But if it's me, what's the key to personal identity? 
Well, not personality, because after all, this doesn't end up with Shelly Kagan's personality before the torture. Shelly Kagan's personality is over there. This is Shelly Kagan's body and that suggests if I don't want this to be tortured, it's because I believe in the body theory of personal identity. Follow the body, not the personality. Even though he swapped our personalities, it's still me he's torturing. That's the intuition I've got when I think about Bernard Williams' second case. Now, we're in a bit of a pickle here, from the philosophical point of view. Because when we've thought about the first case, the intuition seemed to be, ah, personality's the key to personal identity. But when we thought about the second case, the intuition seems to be, huh, body is the key to personal identity. That's bad, right? Two different cases give us two different, diametrically opposed, answers on the very same question. One sec. And it's worse still--Of course, if you don't share the intuitions that I just described--I was being honest with you. Those really are my intuitions when I think about these cases. If you're with me, you're in a philosophical problem. If you're not with me, if you didn't have the same intuitions, then maybe you don't have a problem. But I've got a problem. And it's worse still because it's not really, if we're careful and think about it, it's not really as though we have two different cases and intuitively we want to give different answers to those two different cases. Really, all we've got there is just one case. It's the very same case, the very same story, that I told two different times. In both cases, before the torturing begins, there's Shelly Kagan's body over here with Linda's personality and there's Linda's body over here with Shelly Kagan's personality. And we're asking, "Which one do you want to be tortured?" It's the very same setup. I just emphasized different elements in a way to manipulate your intuitions. 
But it's the very same case. It can't be that in one of them, follow the body and the other one, follow personality. So it's very hard to know what moral should we draw. The appeal to intuition, thinking about these cases doesn't seem to take us very far. There's a question back there. Yeah? Student: [inaudible] Professor Shelly Kagan: Nice suggestion. So the suggestion was this. When the mad scientist put my personality, Shelly Kagan personality, onto Linda's body, he had to modify Linda's brain. And in modifying Linda's brain--this was the question that was just raised--hasn't he actually made that brain more like Shelly Kagan's brain than Linda's brain? And if that's right, shouldn't we say--Remember, the best form of the body view, I argued previously, was the brain version. So if this is really Shelly Kagan's brain over here, then this isn't a problem for the body view. We were deceived when we said the body view said this is Shelly Kagan. Really, the body view, to wit, the best version of the body view, that is the brain version, now has to say, "Oh, we moved Shelly's brain and put it here." Well, if you're prepared to say that, then indeed you will be able to say, yeah, it's the body view. The body view says, "Do it to this one." Rather, "Don't do it to this one, because this is Shelly Kagan." I don't actually find myself though inclined to agree with you that this has become Shelly Kagan's brain. If you ask me, "Where's Shelly Kagan's legs?" They're still here. "Where's Shelly Kagan's heart?" It's still here. "Where's Shelly Kagan's brain?" It's still here. It's not as though what the scientist did was open up my skull, take the brain out. At least, if that's the way we're imagining it, don't imagine that! This is all electronic transfer. It's not as though he took the brain out and literally moved that hunk of tissue over here. All he's done is reprogram Linda's brain. Analogy here that might be helpful. 
Think of the difference between the computer and the programs and files saved on the computer. Personality is a little bit like a program that's running on the computer. Though we have to have not just the generic program, but the specific data files and databases and so forth. What the mad scientist did, in effect, was wipe out the hard drive, put in the other programs from the Shelly Kagan computer, but it's still the very same computer. It's still the same central processing unit, or so it seems to me. Of course, it's true that now, in a certain way, Linda's brain will be similar to the way that Shelly Kagan's brain had been before. In terms of how, as it were, the floppy drives are set up. But still where's, literally speaking, Shelly Kagan's brain? I want to say it's over there, not over here. There was another question or comment. Yeah. Student: [inaudible] Professor Shelly Kagan: I'm not quite sure what the question is. So the thought is, look, over here we've got Shelly Kagan's body with Linda's personality. If we torture this one, this thing, whoever it is, is going to think to itself, "I'm Linda. I'm in horrible pain. I wish it would stop. I wonder whether I'll ever see Linda's husband again." Over here we've got Linda's body, Shelly Kagan's personality. If we torture this one--of course to torture, you cause pain to bodies, but the pain gets felt in the mind. So over here, we've got something that's going to think to itself, "I'm Shelly Kagan. I'm in horrible pain. I wonder whether I'll ever see Shelly Kagan's wife again." Yes, of course, we're torturing bodies. By torturing the bodies we cause pain to the minds, the personalities, who have beliefs about who's hurting. What I'm inviting you to think about is which one, if you had to choose between these two gruesome scenarios, which one would you rather save? Which one would you rather protect? Which do you care more about? Making sure that your lump of flesh doesn't have its neurons hurt? 
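The hardware/software analogy here can be made concrete with a toy model. The sketch below is my own illustration, not anything from the lecture; the class, field names, and sample data are invented. A serial number stands in for the physical brain, and the stored files stand in for the personality: reprogramming copies the files but moves no hardware.

```python
# Toy model of the brain-as-computer analogy (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Computer:
    serial_number: str                          # stands in for the physical brain/body
    files: dict = field(default_factory=dict)   # stands in for the personality

def reprogram(source: Computer, target: Computer) -> None:
    """Wipe the target's drive and copy over the source's files.
    The target's hardware (its serial_number) is untouched."""
    target.files = dict(source.files)

shelly_pc = Computer("SN-SHELLY", {"memories": "grew up in Chicago"})
linda_pc = Computer("SN-LINDA", {"memories": "grew up in Pennsylvania"})

reprogram(shelly_pc, linda_pc)

# Linda's machine now holds the Shelly data...
assert linda_pc.files == {"memories": "grew up in Chicago"}
# ...but it is still, literally, the same machine.
assert linda_pc.serial_number == "SN-LINDA"
```

On this picture, asking "where is Shelly Kagan's brain?" is like asking "where is the machine with serial number SN-SHELLY?": copying files never changes the answer.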
Or making sure that the person who's thinking to themselves, "I'm Shelly Kagan"--or whatever your name is--"I'm in pain." You don't want to be thinking, "I, Shelly Kagan, am in pain." Or if your name is Mary, "I'm Mary. I'm in pain." That's what we're trying to get straight on here. The trouble though is that you tell the very same story two different times and I find myself sometimes being pulled this way, sometimes being pulled that way. So I can't use thinking about the Williams cases as a method of deciding what I really believe, the body view or the personality view. I find that you spin the story one way and I follow the body. You spin the story another way, and I follow the personality. If we're going to have a way to decide between these two theories, it seems as though we need some other kind of arguments. At least, I need some other kind of arguments, because of the intuitions I've got about the cases. So let me turn to a different approach to solving the question, answering the question, which one should we believe? It starts by raising a certain philosophical objection to the personality theory. It's going to say, look, the personality theory of personal identity has an implication that we cannot possibly accept. So we have to reject the view. And then become body theorists, if there are no souls. Here's the objection. It's a common enough objection. It's probably occurred to some of you. According to the personality theory, whether somebody is me depends on whether he's got my beliefs. For example, the belief that I'm Shelly Kagan, professor of philosophy at Yale University. Well, I'm not an especially interesting fellow. So let's make it more dramatic and think about Napoleon. You've probably read about this sort of thing. Every now and then there are some crazy people who think they're Napoleon. So imagine that there's right now somebody in an insane asylum in Michigan who's got the thought, "I am Napoleon." 
Well, the objection says, clearly this guy's just insane, right? He is not Napoleon. He's David Smith who grew up in Detroit or whatever. He just insanely believes he's Napoleon. Yet, the personality theory, the objection says, would tell us that he is Napoleon because he's got the beliefs of Napoleon. He's got Napoleon's personality. Since that's obviously the wrong thing to say about the case, we should reject the personality view. But not so quick. The personality view doesn't say anybody who has any elements at all of my personality is me. One belief in common is obviously not enough. Look, we all believe the earth is round. That's not enough to make somebody else me. Of course, the belief, "I am Napoleon" is a much rarer belief. I presume that none of you have that belief. I certainly don't have that belief. Napoleon had it and David Smith in Michigan's got it. But so what? One belief, even one very unusual belief's not enough to make somebody Napoleon, according to the personality theory. To be Napoleon, you've got to have the very same overall personality, which is a very big, complicated set of beliefs and desires and ambitions and memories. David Smith doesn't have that. David Smith in the insane asylum in Michigan does not remember conquering Europe. He doesn't remember being crowned emperor. He doesn't remember being defeated by the British. He doesn't have any of those memories. He probably doesn't even speak French. Napoleon spoke French. He doesn't have Napoleon's personality. So the David Smith case isn't really bothersome. It's not really a counterexample to the personality theory. The personality theory says, to be Napoleon, you've got to have Napoleon's personality. But David Smith doesn't. So, of course, we can all agree David Smith, despite thinking he's Napoleon, is not Napoleon. No problem here for the personality theory. But we could tweak the case. We could revise the case. 
Some foe of the personality theory could say, "Okay, imagine that this guy in Michigan does have Napoleon's personality. He's got the memories of being crowned emperor and being defeated, conquering Europe. He's got all of those memories." And, remember we want him to have Napoleon's personality. He doesn't have any David Smith memories. He doesn't have any memories of growing up in Detroit. How could Napoleon have memories of growing up in Detroit? Napoleon grew up in France. The objection then says even if this guy had all of Napoleon's memories, beliefs, desires, personality, still wouldn't be Napoleon. So the personality theory's got to go. Well, when I think about this example, I think, now we've got it right. That is, that is what the personality theory has to say about that case. But I'm not so confident anymore that it's the wrong thing to say. So think of this, as it were, from the point of view of Napoleon, right? So there was Napoleon in the 1800s conquering Europe and being crowned emperor, being defeated by the British, being sent to exile on, was it Elba, right? And I forget where Napoleon actually dies, but he's got memories of getting sick and ill and the light begins to fade and he goes unconscious. And then--well, we'll at least try to describe it this way--he wakes up. And he wakes up in Michigan. And he thinks to himself, "Hallo. Je suis, Napoleon! What am I doing in Michigan?" I don't speak French, so I'm going to drop that, right? "But the last thing I remember I was going to bed from my fatal illness on the Isle of Elba. How did I get over here? I wonder if there's any chance of reassembling my army and reconquering the world." If he had all of that, it's not so clear to me that it would be the wrong thing to say that, by golly, this is Napoleon. I mean, it would be totally bizarre. Things like this don't happen. But of course, we're doing science fiction stories here. 
So we'd say to ourselves, wouldn't we, somehow Napoleon has been reborn or reincarnated, taking over, by some sort of process of possession, the body of the former David Smith, but now it's Napoleon. I find myself thinking maybe that would be the right thing to say. Yeah? Student: [inaudible] Professor Shelly Kagan: All right, so the thought was, look, this guy over here, David Smith's body with Napoleon's personality--And let's be clear about this. There's no underlying David Smith personality still there, to have the counterexample or the example that we're after. It can't be that he's got mixed together memories of growing up in France and memories of growing up in Detroit. He never thinks to himself, "I'm David Smith. How did I become Napoleon?" If you got that junk, you don't have Napoleon's personality. He's just got Napoleon's personality through and through. Well, the question then was, maybe that's not so. After all, he doesn't really have Napoleon's experiences, did he? Napoleon had the experience of being crowned emperor. But this guy didn't have the experience of being crowned emperor. Maybe what we should say is he thinks he remembers the experience of being crowned emperor, but it's a fake memory. It's an illusion, or a delusion, but he didn't really have the genuine memory. To have the genuine memory, he has to have been crowned emperor. And he wasn't crowned emperor; Napoleon was crowned emperor. Well, that's what we could say, but we can't say that until we decide he's not Napoleon. After all, if the personality theory is right, since he does have all of these memories, or semi-memories, or quasi-memories, or whatever we should call them. If that's the key, then it is Napoleon. So he is remembering being crowned emperor. If you want to say, no, no, no, those memories are illusions, it must be because you don't think he's really Napoleon. In which case, what you're discovering is you don't really believe the personality theory. 
Why isn't he Napoleon? It's not his body. The body of Napoleon is not this body and to be Napoleon, you've got to have Napoleon's body. It's a possible position. That's the thought that the body theorists are trying to elicit in you when they offer these Napoleonesque counterexamples. You could match the personality as much as you want, but it's still not Napoleon. Don't you agree? That's what they say. And if you do agree, that shows you don't really accept the personality theory. I'm not going to try to settle this here. Who should we believe? The personality theory or the body theory? I'm trying to invite you to think about the implications and the differences between these views so as to get clearer in your own mind about which of these you accept. In many moods--at least, when I think about not the simple, the ordinary David Smith case with a single belief or two, but the full bodied--that's a bad term--the full blown Napoleon case with all the memories, all the beliefs. Suppose David Smith there thinks, "I remember. I remember." I can't say it in a French accent. "I remember playing as a lad in France burying my little toy saber." Some memory that Napoleon never wrote down in his diaries. And we go and we dig up in France and there is the saber, right? This guy remembers things that Napoleon would remember. I find myself thinking, well, maybe that's Napoleon. Imagine a slightly different version of this case. Napoleon dies on his death bed, wakes up in heaven saying "Je suis Napoleon. I was emperor of Europe and now I have come to my due reward. I am here in heaven." Well, it seems like what we would want to say is, "Yeah, that's Napoleon." It's Napoleon even if it doesn't have Napoleon's body. Napoleon's corpse is rotted in France. God gives Napoleon some new angelic body. It seems straightforward. If it's got Napoleon's memories, beliefs, desires, goals, and so forth and so on, wouldn't we say it's Napoleon? 
Imagine that--back to this earth--this Napoleon type of case happened all the time. We might have a term for this sort of thing--possession. Every now and then, people's bodies get possessed. They become this other person, whose personality has now taken over. If this happened frequently enough, instead of just a little science fiction story like with the David Smith case, maybe we'd say, yeah, possession is one of these things that needs to be explained. How is it the personality travels? Well, maybe there'll be some sort of physical explanation for it. Still, maybe we'd say, yeah, the people have been taken over. They've become somebody else. So speaking personally, I don't find the Napoleon objection a telling one. It doesn't give me a reason to reject the personality theory. But we can now tweak the worry in a slightly different way. Okay, so here was Napoleon back in France with his memories and his beliefs and so forth and so on. Death bed, goes to sleep, goes unconscious, whatever it is. I told you a story in which he wakes up, or his personality wakes up, however we should put it, in Michigan. But if it could happen in Michigan, I suppose it could also happen in New York. And if it could happen in New York and it could happen in Michigan, I suppose it could happen in New York and Michigan. So right now, let's imagine two people with Napoleon's personalities, complete personalities, one of them in Michigan, one of them in New York. Whoa. What should we say now? What is the personality theory going to say about this case? So I don't know how to draw personalities very well on the board, so I'll draw little stick figures of bodies, but I mean these to be the personalities. So here we've got the continuing, evolving over time--this is all taking place in France--the personality of Napoleon in France. There's the deathbed scene. Now, up here we had somebody with Napoleon's personality continuing. Of course, he's going to change. He's going to evolve. 
Just like the actual historical Napoleon kept having new beliefs and new desires, if this really was Napoleon in Michigan, he'll start having some new desires and beliefs about Michigan, which perhaps Napoleon never gave any thought to at all. Who knows? So this is Michigan over here. And I said I was willing to entertain the possibility that this is all Napoleon. Napoleon, if you think of it, Napoleon's a person extended through space and time. According to the personality view, what makes somebody in the future the same person as somebody in the past is if it's part of the same ongoing personality. So maybe that's what we've got going on in the Michigan case. Now, we imagine in our new version of the worry somebody with Napoleon's personality over here in New York. Now, if the Michigan guy hadn't been there, what I would have done, if I believed in the personality theory or when I believe in the personality theory, is say "Oh look, Napoleon--reincarnated in New York." That's what the personality theory should say and I said it doesn't seem like a crazy thing to say if we only had the guy in New York. Just like it wasn't a crazy thing to say if we only had the guy in Michigan. The trouble is, imagine the case where we've got one guy who's got all of Napoleon's personality in Michigan, one guy who's got all of Napoleon's personality in New York. Now what should we say? What are the choices here? Well, I suppose one possibility would be to say the guy in New York is Napoleon. The guy in Michigan isn't. He's just an insane guy who's got Napoleon's personality. You could say that. The reason that it seems difficult to say though is because it seems like it would be just as plausible to say the reverse. Say no, no, no. It's not the New York fellow who's Napoleon. It's the Michigan fellow who's Napoleon. Well, we could say that, but the difficulty is there seems to be no good reason to favor the Michigan fellow over the New York fellow. 
Just like there was no good reason to favor the New York fellow over the Michigan fellow. Saying that one of them is Napoleon and the other one isn't seems very hard to believe. Well then, what's the alternative? Well, I suppose another possibility--at least another possibility worth mentioning--is to say they're both Napoleon. Somehow, bizarrely enough, Napoleon split into two. But when splitting into two, he split onto two bodies, but they are both Napoleon. Now, it's very important to understand how bizarre this proposal would be. The claim is not that now we've got two Napoleons who are, of course, not identical to each other. No, no, we've got a single Napoleon. A Napoleon who was in one place in France and is now simultaneously in two places in the U.S. That seems very hard to believe. It seems to just violate one of our fundamental notions about how people work, metaphysically speaking. People can't be in two places at the same time. Well, maybe that metaphysical claim I just made should be abandoned. Maybe we should say, oh, under normal circumstances, people can't be in two places at the same time. But if you had something like this, by golly, this guy would be--Michigan dude is Napoleon and he's the very same person, the very same person as New York dude. New York dude and Michigan dude are a single person, Napoleon, who is bilocated. It doesn't happen; but if this case occurred, it could happen. Well, maybe that's what we should say. But again, all I can tell you is, I find that too big a price to pay. People can't be in two places. It's one thing to say people are space-time worms extended through space and extended through time. It's another thing to say that they are Y-shaped space-time worms. It seems to violate one of the fundamental metaphysical things about how people work. All right, I've got to remind you though, none of the options here are all that attractive. 
So when I say, you don't want to say that, you don't want to say that, we're going to run out of possibilities. So maybe this is what you'll want to say. All right, saying that Napoleon is in Michigan but not New York doesn't seem very attractive. Saying he's in New York but not Michigan doesn't seem very attractive. Saying he's in both places at the very same time doesn't seem very attractive. But what other possibilities are there? If he's not one or the other, and if he's not both, the only other possibility is that he's neither. Given this situation, neither of these guys is Napoleon. You've got separate people. There's the person Napoleon, a space-time worm that came to an end in France. And there's some space-time worm taking place in Michigan, some space-time person worm taking place in New York. But neither of them is Napoleon. That seems to me to be the least unattractive of the options we've got available. But notice that if we say this, if we say neither of these guys, despite having Napoleon's personality, neither of these guys is Napoleon, then the personality theory of personal identity is false. It's rejected. We're giving up on it. Because the personality theory, after all, said if you've got Napoleon's personality, you're Napoleon. But now we've got people that are not Napoleon but they've got Napoleon's personality. So the personality theory, follow the personality, is wrong if we say neither of these guys is Napoleon. But that does seem to be the least unacceptable of the options. At least that's how it seems to me. So the personality theory's got to be rejected. Now, I think that's right. I think, in fact, the personality theory's got to be rejected. But that doesn't mean we couldn't revise it. We could try to change it in a way that keeps much of the spirit of the personality theory, but avoids some of the problems we've just been looking at. Here's what I think is the best revision available to fans of the personality theory. 
They should say we were simplifying unduly. We were simplifying it, getting it wrong, when we said, "Follow the personality. If you've got Napoleon's personality, that's enough to make you Napoleon." That's not true. We need to throw in an extra clause to deal with branching, splitting cases, of the sort that I've just been talking about. We need to say, if there's somebody in the future who's got my personality, that person is me, as long as there's only one person around in the future who's got my personality. If you have multiple examples, duplications, splittings and branchings, nobody, none of them is me. So where the original personality theory said, same personality, that's good enough for being the same person, the new version throws in a no-competitors clause, throws in a no branching clause. It says, same personality's good enough, as long as there's no branching. If there is branching, neither of the branches is me. Now, if we say that, if we throw in the no branching clause, then we're able to say, look, in the original story I was telling, where there was the Michigan guy who had Napoleon's personality but no New York guy, that really would be Napoleon, because it would have the same personality with no competitor. Similarly, had we had somebody with Napoleon's personality in New York and nobody with the personality in Michigan, that guy would have been Napoleon, because we would have had the same personality with no branching, with no competitor. But in the case where we've got branching, where we've got somebody with Napoleon's personality both in Michigan and New York, that violates the no branching rule, and we just have to say nobody's Napoleon in that case. As I say, that seems to me to be the best revision of the personality theory available to them. So what we now need to ask is, can we possibly believe that revision? Can we possibly accept the no branching rule? The no branching rule seems rather bizarre in its own right. 
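The revised theory just stated--same personality plus a no-branching clause--can be written out as a simple decision rule. The sketch below is my own formalization, not Kagan's; the function and variable names are invented, and a whole personality is flattened to a single comparable value purely for illustration.

```python
# A toy formalization of the personality theory with a no-branching clause.
# "Personality" is flattened to a single comparable value for illustration.

def same_person(past_personality, future_individuals, candidate):
    """Return True iff `candidate` is the same person as the past individual.

    future_individuals maps a label for each future individual to that
    individual's personality. No-branching clause: if more than one future
    individual matches the past personality, nobody counts as the same person.
    """
    matches = [label for label, personality in future_individuals.items()
               if personality == past_personality]
    return len(matches) == 1 and matches[0] == candidate

napoleon = "Napoleon-personality"

# Only the Michigan fellow has the personality: he counts as Napoleon.
assert same_person(napoleon,
                   {"Michigan": napoleon, "New York": "Smith-personality"},
                   "Michigan")

# Branching case: both have the personality, so neither counts as Napoleon.
both = {"Michigan": napoleon, "New York": napoleon}
assert not same_person(napoleon, both, "Michigan")
assert not same_person(napoleon, both, "New York")
```

Notice that the verdict on the Michigan fellow changes with an argument describing what exists elsewhere, which is exactly the worry about external facts that the lecture raises next.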
Think about the ordinary familiar cases that we're trying to make sense of. I'm the same person as the person that was lecturing to you last time. According to the personality theory or the revised personality theory, that's because I've got the same personality. The guy last time thought he was Shelly Kagan, believed he was professor of philosophy. I think I'm Shelly Kagan. I believe I'm the professor of philosophy. He's got all sorts of memories of his childhood. I've got the same memories. He's got desires about finishing his book. I've got those desires about finishing my book. Same personality, it's me. That's what the personality theory says. So I conclude, hey, it's me. I know you were all worried whether I'd survive over the break a couple days. Came back, it's still me. I made it through Wednesday. Or did I? Or perhaps I should ask, "Or did he?" Yeah, there was somebody there on Tuesday and yeah, there's somebody here on Thursday, and yeah, this person here now has got the same personality as the guy who was there on Tuesday. But according to the no branching rule, we can't yet conclude that I'm the same person as the person that was lecturing to you on Tuesday. We can't conclude that until we know that there aren't any competitors, that there isn't anybody else right now who also has the same personality. If I'm the only one around today who's got Shelly Kagan's personality, then I'm the same person as the person who was lecturing to you on Tuesday. But if, unbeknownst to me, and I presume unbeknownst to you, there's somebody in Michigan right now who's got Shelly Kagan's personality, then we have to say, huh, it turns out I'm not Shelly Kagan after all. Neither is he. Neither of us are Shelly Kagan. Shelly Kagan died. So am I Shelly Kagan or am I not Shelly Kagan? Can't tell until we know what's going on in Michigan. Whoa! That seems very, very hard to believe. 
Whether I am the same person as the person who was lecturing to you on Tuesday presumably should turn on facts about that guy who was lecturing to you on Tuesday, and facts about this guy who's lecturing to you today on Thursday, and maybe some facts about the relationship between that guy and this guy--or that stage and this stage, if we prefer to talk about it that way. We can see how whether it's the same person or not has to turn on the relations between the stages. But how could it possibly turn on what's happening in Michigan? How can whether or not I am the same guy as the guy who was lecturing to you on Tuesday depend on what's happening in Pennsylvania or Australia or Mars? To use some philosophical jargon, the nature of identity seems like it should depend only on intrinsic facts about me or perhaps relational facts about the relations between my stages. But it shouldn't depend on extrinsic, external facts about what's happening someplace else. But if we accept the no branching rule, we're saying whether or not we've got identity depends on what's happening elsewhere. With the no branching rule, identity ceases to be a strictly internal affair. It becomes, in part, an external affair. That's very, very hard to believe. And if you're not prepared to believe it, it looks as though you've got to give up on the personality view. Last thought. During all of these problems for the personality theorist, the fans of the body theory are standing there laughing. "Ha! You poor fools. Look at all the problems you've got adopting the personality theory. See how easy it is to duplicate personalities, leading to these totally implausible no branching rules. We can avoid all of that if we become body theorists." What we'll ask ourselves next time is whether or not the body theorist is in a better situation.
YaleCourses: Philosophy of Death
Lecture 13: Personal Identity, Part IV: What Matters

Professor Shelly Kagan: Let me start by reviewing the problem that we were considering last week. We were raising a difficulty for the personality theory of personal identity, according to which the key to being the same person is having the very same ongoing, evolving personality. And the difficulty was basically the problem of duplication. That it seemed as though we could have more than one--call it an individual--more than one body, that had the very same set of memories, beliefs and so forth. And we have to ask ourselves, "Well, what should the personality theory say about a case like that?" So imagine that over the weekend, the mad scientist copied my memories, beliefs, desires, fears, ambitions, goals, intentions and imprinted them on somebody else's brain. They did it last night at midnight. This morning, we woke up. And we have to ask ourselves, "Who's Shelly Kagan? Who's the person that was lecturing to you last week?" Well, it doesn't seem plausible in terms of the personality theory to say that he's Shelly Kagan, and the one here today is not--Suppose the other one's in Michigan. If the one in Michigan's Shelly Kagan but this one's not--After all, although it's true that he's got Shelly Kagan's memories, he woke up thinking he was Shelly Kagan, just like I woke up thinking I was Shelly Kagan. He woke up thinking about what he was going to lecture on in class today, just like I woke up thinking about what I was going to lecture on in class today. He remembered last week's lecture just like I remembered last week's lecture. Well, there's no clear reason to say--for the personality theory to say--that he's Shelly Kagan and I'm not. After all, I've got the very same set of memories, beliefs, desires that he has. But equally true, and more surprisingly, from the personality theory point of view, there's no reason to say that I'm Shelly Kagan and he's not.
After all, he's got all the same memories, beliefs and desires that I do. It doesn't seem plausible to say we're both Shelly Kagan, because then we'd have to say Shelly Kagan's in two places at the same time. So the only alternative seems to be to say that neither of us is Shelly Kagan. But if neither of us is Shelly Kagan, then the simple original personality theory was false. Because according to that theory, having the personality is what it took to be Shelly Kagan. We both have it, yet neither of us is Shelly Kagan. The personality theory must be false. So we revise the personality theory to say, the secret to personal identity is having the same personality--provided that there's no branching. Provided there's no splitting. Provided there's only one best competitor, not two equally good candidates. Given the no branching view, the no branching rule, we can say, in the ordinary case, look, there really wasn't anybody imprinted with my memories and desires in Michigan. I'm the only one around on earth right now with Shelly Kagan's memories and desires. Since there's no competitor, and I've got the personality, I'm Shelly Kagan. I'm the very same person that was here lecturing to you last week. That's the answer the personality theory gives us, the answer we're looking for in the ordinary case. But in the science fiction story where there's a duplicate, it says, if there's branching, the no branching rule comes in. Neither of them is Shelly Kagan. All right, so that's the best way for the personality theory to get revised to deal with this problem. The trouble was, the no branching rule seems very counter-intuitive. So think about it. Here, right now I'm standing in front of you saying I'm Shelly Kagan, the guy who was lecturing to you last week. I believe I'm Shelly Kagan, the guy who was lecturing to you last week. Am I Shelly Kagan? Well, I've got Shelly Kagan's personality. So far so good.
Now all we have to decide is, was the no branching rule satisfied or violated? So all we have to know is, is there somebody else somewhere in the universe who's got all my memories and beliefs and desires? Well, how in the world could I know that? Whether I, this person talking to you right now, am Shelly Kagan depends on whether there's some duplicate with all my memories in Michigan or not? That seems very counter-intuitive. So although the personality view with the no branching rule avoids the problem of what to say about duplicates, by saying when there's branching, neither of them is Shelly Kagan, the branching rule itself seems very counter-intuitive. We feel as though whether somebody is me or not should depend upon internal facts about me and my earlier stages, or this stage and that stage, not on what's happening elsewhere, outside, extrinsic to these things. So, if you're not willing to accept the no branching rule, if it strikes you as a bizarre thing to throw into a theory of personal identity, maybe you need to reject the personality theory. Now during all of this, the fans of the body view typically are laughing. They say this just goes to show what a dumb theory the personality theory is. The whole problem with the personality theory is that personality is a bit like software. It's like programs--the various programs you run on your computer, along with the various data files that you have saved on your hard drive, and so forth. And those can be duplicated. You can have copy after copy after copy. You could have two copies of my personality. You could have 100 or 1,000. What drove the personality theory into the no branching rule, implausible as it may be, was the fact that your personality is like software, and it can be copied. That's why, they say, we should believe in the body view. If we accept the body view, we avoid the duplication problem. Because, unlike software, which can be copied--as many identical copies as you want--the body can't split.
Human bodies can't divide or branch. There's no way the body that was here on Thursday could become two bodies. So we avoid the whole problem. That's at least the kind of claim that fans of the body view often make in the face of this difficulty for the personality theory. Well, now we need to ask, is it really true? Is it really true that bodies don't face a duplication problem? Is it really true that human bodies don't and couldn't split? Look, the crucial word here is, of course, "couldn't." Personalities don't actually split either, right? Although I've been giving science fiction examples in which the mad scientist duplicates my memories and beliefs and desires, they've all been science fiction examples. If I can use science fiction to talk about the possibility of splitting, and use that against the personality theory, I'm entitled to use science fiction examples to talk about the possibility of bodies splitting, and ask what kind of problem that would raise for the body theory. Now, we are familiar with some low-level examples of bodies splitting. Amoebas split, right? Let's draw our amoeba splitting. You've got a single amoeba, going along. At a certain point, it starts to look like that. Then it looks like that. And then boom! It splits. There's nothing in biology per se that rules out cell division. Indeed, on the contrary, right? We know cells can split. Now, human bodies, unlike amoebas, don't do that. But maybe there's nothing in biology that rules out the possibility. Suppose we open up the Yale Daily tomorrow and we see that the Yale Center for Amoebic Studies has made this tremendous breakthrough and has discovered how to, through the right kind of injection or whatever, cause a human body to replicate and split in an amoeba-like fashion.
Well, then we have to face the problem of what to say in this case of bodily branching. Well, instead of pursuing that example, let me give you a slightly different example that's been discussed a fair bit in the philosophical literature. This is actually a case that one of the students in the class asked about, I think it was last week, if not even earlier. And I said, "Great question. Let's come back to it." So here, at long last, I'm making good on my promissory note. I'm going to come back to the example that was raised before. You'll recall that when we talked about the body view, I said the best version of the body view doesn't require the entire body in order to be the same person, but the brain. Follow the brain. And indeed, it doesn't seem as though we have to require the entire brain, just enough of the brain, however much that turns out to be, to house personality, memories and so forth. And then, I said, suppose it was possible that one hemisphere of your brain is enough. If there's enough redundancy in the brain, then even if your right hemisphere got destroyed, with just your left hemisphere you'd still have all the same memories, desires, beliefs. Good enough. So now we worry about the following case. I gave you a bunch of examples, right, where brains get transplanted into the torsos of others. So suppose, gruesome as it is, this weekend I'm in some horrible accident and my torso gets destroyed and they keep my brain on life support, oxygenating it just long enough to do some radical surgery involving some spare torsos. Where'd the torsos come from? Well, you had some living people, but they had very rare brain diseases and their brains suddenly liquefied. So now we've got some spare torsos. All right, so here we've got Shelly Kagan. His body gets destroyed. And here's my brain. Over here we've got Jones' torso. And over here we've got Smith's torso. Suppose we take this one--call it the left hemisphere--and we stick it in here, into Jones' torso.
We take this other hemisphere, the right half of my brain and we stick it into Smith's torso. We connect all the wires, all the neurons. The operation's a smashing success. Both things wake up. So here's Jones' torso with the left half of SK's brain. Smith's torso with the right half of SK's brain. They wake up. We need some way to refer to these people, so we can start talking about who they are. Let me just call this top one--Jones' torso with the left half of Shelly Kagan's brain--let's call him Lefty. Smith's torso with the right half of Shelly Kagan's brain, let's call him Righty. Okay, operation's a success. Lefty and Righty both wake up. They both think they're Shelly Kagan, and so forth and so on. And we ask ourselves, according to the body view, which one is Shelly Kagan? What are the possibilities? We could say Lefty is Shelly Kagan and Righty is not. Righty's an imposter. But there's nothing in the body view to give us a reason to make that choice. It's true that Lefty's got half of Shelly Kagan's brain and that's good enough. But it's also true that Righty's got half of Shelly Kagan's brain and that seems good enough. So there's no reason to say that Lefty is Shelly Kagan and Righty isn't. And similarly, of course, there's nothing in the body view to make us say that Righty is Shelly Kagan and Lefty isn't. Well, if it's not one, and not the other, what are the remaining possibilities? We could, I suppose, try to say they're both Shelly Kagan. And so Shelly Kagan continues, that is to say his body continues, that is to say his brain continues, that is to say enough of his brain continues, merrily on its way, except now in two places. And so from now on, Shelly Kagan, that single person, is in two different places at the same time. Lefty goes to California. Righty moves to Vermont. From now on, Shelly Kagan's bicoastal. It doesn't seem right. So what else can the body theory say? Well, the body theory could say neither of them are Shelly Kagan. 
Shelly Kagan died in that gruesome, horrible accident. Although it's true that we now have two people, Lefty and Righty, each of whom has half of Shelly Kagan's brain, and all of Shelly Kagan's memories, for whatever that's worth, neither of them is Shelly Kagan. We could say that as well. And that seems the least unpalatable of the alternatives. But if we say that, then we've given up on the body view. Because the body view, after all, said to be Shelly Kagan is to have enough of Shelly Kagan's brain. And in this case, both Lefty and Righty seem to have enough of Shelly Kagan's brain. What's the body theorist to do? As far as I can see, the best option for the body theorist at this point is to add--no surprises here--a no branching rule. The body theorist should say, "The key to personal identity is having the same body, to wit, the same brain, to wit, enough of the brain to keep the personality going--provided that there's no branching, no splitting, no perfect competitors, only one." If the body view adds the no branching principle, then we can say, look, in the case of this sort of splitting--this example is known in the philosophical literature as fission, like nuclear fission, when a big atom splits into two--in the fission case, the body theorist says, there's splitting, there's branching. So neither of them is going to end up being Shelly Kagan. But in the ordinary humdrum case, here I am, my body. Why am I Shelly Kagan? Because the brain in front of you--you can't see it, but it's in front of you--the brain in front of you is the very same brain as the brain that you had in front of you on Thursday. Follow the body, in particular follow the brain. So in the ordinary case, no splitting, follow the brain. In the special case where there's splitting, even if you follow the brain, not good enough.
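The body view with a no branching clause has exactly the same logical shape as the revised personality view; only the matching relation changes. The following toy sketch (my own illustration, not anything from the lecture; all the names and record structures are invented) makes that shared structure explicit by treating the identity criterion as a parameter and the no branching clause as the same wrapper either way:

```python
# Toy sketch: any criterion of personal identity (same personality, enough
# of the same brain, ...) can be wrapped with the same no branching clause.

def with_no_branching(matches_criterion):
    """Wrap a two-place matching relation with the no branching rule."""
    def same_person(past, future_candidates):
        matches = [c for c in future_candidates if matches_criterion(past, c)]
        # Exactly one match: identity holds. Zero, or branching: it doesn't.
        return matches[0] if len(matches) == 1 else None
    return same_person

# Hypothetical records standing in for the fission case.
past_sk = {"brain_halves": {"L", "R"}, "personality": "SK"}
lefty = {"name": "Lefty", "brain_halves": {"L"}, "personality": "SK"}
righty = {"name": "Righty", "brain_halves": {"R"}, "personality": "SK"}

# Body view: having enough of the brain (here, any half) is the criterion.
body_view = with_no_branching(
    lambda past, c: bool(past["brain_halves"] & c["brain_halves"]))

# Personality view: having the same personality is the criterion.
personality_view = with_no_branching(
    lambda past, c: past["personality"] == c["personality"])

# Fission: both candidates match under either criterion, so both views
# say nobody survives -- neither Lefty nor Righty is Shelly Kagan.
print(body_view(past_sk, [lefty, righty]))         # prints None
print(personality_view(past_sk, [lefty, righty]))  # prints None
print(body_view(past_sk, [lefty])["name"])         # prints Lefty
```

The point of the sketch is that fission is one case raising one problem for both views at once, which is why both end up reaching for the very same repair.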
So the body theorist can avoid the problem of fission, avoid the problem of duplication, by adding the no branching rule. But of course, the no branching rule didn't seem very intuitive. Whether or not I'm Shelly Kagan, the guy that was lecturing to you on Thursday, depends on whether, unbeknownst to me, over the weekend, somebody removed half of my brain, stuck it in some other torso, sealed me all back up. How could that matter? Well, if you don't find the no branching rule plausible, you're in trouble as a body theorist. In fact, what we see is that the body theory faces exactly the same problem, is in exactly the same situation, as the personality theory. Indeed, the fission example is a very nice case of how you could have splitting for the personality theory. Here, before the accident, was Shelly Kagan, somebody who had my beliefs, desires, memories, goals, and so forth. After the accident, we've got two people, Lefty and Righty, or two entities, Lefty and Righty, both of whom have Shelly Kagan's memories, beliefs, desires, goals, and so forth. Splitting the brain shows how you could, in fact, have splitting of personality. So the very same case raises the very same problem for both the body view and the personality view. And the only solution that I can see, at least the best solution that I can see, is to accept the no branching rule. If you don't like the no branching rule, it's not clear what your alternatives are. Or at least, it is clear what your alternatives are; it's not clear which alternative would be any better. Now during all of this--problems for the personality theory, problems for the body view--during all of this, the soul theorist is having a field day.
The soul theorist is saying, "Look you guys, you got into all this trouble with splitting and so forth and so on, and needing to add the no branching rule, silly and implausible as that seems. You got into all that trouble because of the problem of splitting: personalities can be split, bodies can be split. If only you had seen the light and stuck to the soul theory of personal identity, all these problems could be avoided." Now, as you know, I don't believe in souls. But forget that issue for the moment. Let's just ask the question, "Is it true that the soul theory--if only there were souls--is it true that the soul theory would at least have the following advantage? It avoids these problems of duplication and fission." Well let's ask. What should a soul theorist say about the fission case? So here's the gruesome accident. My brain gets split apart. One part gets put into Jones' torso. One part gets put into Smith's torso. After the operation, Lefty wakes up thinking he's Shelly Kagan. Righty wakes up thinking he's Shelly Kagan. Lefty's got part of Shelly Kagan's brain. Righty's got part of Shelly Kagan's brain. What should the soul theorist say about the case of fission? Well, again, remember, the soul theory says the key to being the same person is having the same soul. Why am I the person that was lecturing to you on Thursday? Because it's the very same soul animating my body, or what have you. So, what does the soul theorist say about the fission case? I'm not quite sure, because we have to turn to a metaphysical question that we've touched upon before, namely, can souls split? After all, the problem that fission raises for the personality theory, in a nutshell, is that personalities can split, they can branch. The problem for the body view that fission raises, in a nutshell, is that bodies can split. They can branch. We need to ask about the metaphysics of the soul: can souls split? And I don't know the answer to that, of course.
So let's consider both possibilities. Possibility number one. Souls, just like bodies, just like personalities, can split. Suppose that's what happened. So, there was a single soul here, Shelly Kagan's soul, but in the middle of this gruesome accident followed by this amazing operation, Shelly Kagan's soul split. So there's one of the SK souls over here and there's one of the SK souls in the other case as well. Each of Lefty and Righty has one of the pieces of the split Shelly Kagan soul. All right, so now we ask ourselves, "According to the soul theory, which one is Shelly Kagan?" By this point, you can run through all the possibilities yourself, right? We could say, well, it's Lefty and not Righty. But there's nothing in the soul theory that supports that claim. They each have an equally good piece--however good it may be--of the original Shelly Kagan soul. So there's no reason to say that Lefty is Shelly Kagan and Righty isn't. There's no good reason to say Righty is Shelly Kagan and Lefty isn't. Well, would it be better to say they're both Shelly Kagan--as long as you've got a piece of Shelly Kagan's soul, of the original soul, then you just are Shelly Kagan? In which case, Lefty and Righty are both Shelly Kagan, and Shelly Kagan is now bicoastal, one part of him in California, one part in Vermont? That doesn't seem very satisfying. What's the alternative? The alternative, it seems, for the soul theorist, is to say, neither of them is Shelly Kagan. If neither of them is Shelly Kagan, then Shelly Kagan died. But how can we say that if we accept the soul theory? They both have pieces of Shelly Kagan's soul. The soul split. Well, maybe what the soul theorist would have to do at this point is accept the--da-ta-da--the no branching rule. "Ah," says the soul theorist, "Follow the soul--unless the soul splits, in which case neither of them is Shelly Kagan."
Well, the trouble is, we didn't find the no branching rule very plausible. It seemed counterintuitive. But at this point, you begin to wonder, maybe we just need to learn to live with it. If the personality theory needs the no branching rule, and the body theory needs the no branching rule, and the soul theory needs the no branching rule, maybe we're just stuck with the no branching rule, whether or not we like it. And if we're stuck with it, then of course it's not an objection against any one of the theories that uses it. Well, this is all what we would say as soul theorists if we think souls can split. But we need to consider the possibility that souls can't split. Maybe the soul theorist has an alternative available that the other theories don't have. Suppose Shelly Kagan's soul cannot split. What does that mean? It means, when my brain gets split, my soul is going to end up in Lefty or in Righty, but not in both. If a soul can't split, you can't end up with pieces of the soul or remnants of the soul in both. The soul is a unified, simple thing. Now, I don't actually know whether it's true that simple things can't split. Metaphysically, I'm not sure whether that's a possibility or not. But look, Plato argued the soul was simple. He didn't actually convince me of that, but suppose we thought souls are simple, and we think simple things can't split. It would follow, then, that souls can't split. Suppose we accept all that metaphysics. Then the question is just, which one is Shelly Kagan? Well, it depends which one ended up with Shelly Kagan's soul. We can't say they both have a piece. One of them will have it, the other one won't. And you want to know which one's Shelly Kagan? The one that actually ends up with Shelly Kagan's soul. If Lefty ends up with Shelly Kagan's soul, then Lefty is Shelly Kagan and Righty is an imposter.
He thinks he's Shelly Kagan, but he's not, because he doesn't have Shelly Kagan's soul. Lefty has it. If Righty's got Shelly Kagan's soul, then Righty is Shelly Kagan and Lefty is the imposter. Now, looking at the situation from the outside, we might be unable to tell which one is really Shelly Kagan. Because we won't be able to tell, looking at it from the outside, which one really has Shelly Kagan's soul. Although it will be true, whichever one really does have Shelly Kagan's soul is Shelly Kagan. But we don't know which one that is. Interestingly, and somewhat more surprisingly, looking at it from the inside, we won't be able to tell either. Lefty will say, "Give me a break. Of course I'm Shelly Kagan. Of course I've got Shelly Kagan's soul. Of course I'm the one." But Righty will also say, "Give me a break. Of course I'm Shelly Kagan. Of course I've got Shelly Kagan's soul. Of course I'm the one." If souls can't split, one of them is mistaken. But there's no way for them to know which one is the one that's deceived. Now, that may be a problem that you're willing to swallow. As we've seen, all the views here have their difficulties. Maybe that's the difficulty you're prepared to accept. What's the right answer in fission? It depends on who's got Shelly Kagan's soul. No way to tell. But still, that's the answer to the metaphysical question. Question? Student: What happens if neither of these had Shelly Kagan's soul? Professor Shelly Kagan: The question was, "What if neither of these has Shelly Kagan's soul?" Then they're both imposters. That's a little bit like the case we worried about when we started thinking about the soul view, right? What if last night God destroyed my soul and put in a new soul? Then Shelly Kagan died. If Shelly Kagan's soul does not migrate to Lefty or Righty, neither of them is Shelly Kagan, according to the soul theory. What happened to Shelly Kagan? Well, if the soul got destroyed, Shelly Kagan died.
If the soul didn't get destroyed, maybe somebody else that we weren't even looking at is Shelly Kagan. So as I say, the soul theory can at least give us an answer that avoids the no branching rule. If souls are simples and simples can't split, there's no possibility of having two things with the relevant soul. So we don't need to add, in this ad hoc fashion, the no branching rule. That's an advantage for the soul theory, if only we believed in souls. It is an advantage. But I need to point out that there's another disadvantage that the fission case raises for the soul theory. So let's just suppose that metaphysically God tells us that it's Lefty that has Shelly Kagan's soul. Then of course it's Lefty that is Shelly Kagan. Righty is an imposter. Righty believes he's Shelly Kagan, he has all the memories of Shelly Kagan, all the desires of Shelly Kagan, but he's not Shelly Kagan because he doesn't have Shelly Kagan's soul. Lefty happens to have it. That's a nice answer to the problem of fission, but notice the problem it raises for the argument for believing in a soul in the first place. Way back at the start of the semester when we asked, "Why believe in souls?" one important argument, or really family of arguments, was: you need to believe in souls in order to explain why bodies are animated, why people are rational, how they can have personalities, how they can be creative, and so forth. In order to explain consciousness and self-awareness. Whatever it was, fill in the blank in your favorite way. The claim was, you needed to believe in souls in order to explain all that. But if that's right, what's going on in Righty's case? Righty is aware. Righty is conscious. Righty is creative. Righty has free will. Righty makes plans. Righty's got personality. Righty is rational. Righty's body is animated.
According to the argument for souls, you needed to believe in souls in order to explain how you could have a person. But now Righty's a person without a soul, because we just hypothesized that Shelly Kagan's soul went to Lefty. So at the very same moment that positing the nonsplitting of souls seems to solve the fission problem of duplication, it yanks the rug out from underneath the soul theorist by undermining one of the types of arguments for believing in the soul in the first place. After all, if Righty can be a person, admittedly not Shelly Kagan, but a person--conscious, creative, rational, aware, and so forth--without a soul, then maybe the same thing is true for us, which is of course what the physicalist says. Let me mention one other possibility, because it's quite intriguing. Suppose the soul theorist answers that last objection by saying, "Ain't ever going to happen." Yes, it would be a problem for believing in souls if Righty could wake up without one. But since we stipulated that Shelly Kagan's soul is going to end up in Lefty, Righty is not going to wake up. Alternatively, it might have been that Righty woke up, but Lefty didn't wake up, didn't survive the operation. Suppose we did these sorts of brain transfers all the time and the following thing always happened. Transfer the entire brain, the patient wakes up. Transfer one hemisphere, the patient wakes up. Transfer both hemispheres, one patient or the other wakes up, but never both. If that happened, we'd have a great new argument for the existence of a soul. What could possibly explain why either hemisphere of the brain would normally be enough, as long as we don't transfer both? When we transfer both, one hemisphere might work sometimes, sometimes the other hemisphere, but never both. What could possibly explain that? Souls could explain that.
If souls can't split, the soul can only follow one half of the brain, and that's why we'd get somebody who's got sometimes one half, sometimes the other half, but never both halves. So there's a kind of empirical argument for the existence of the soul if we found those kinds of results. Of course, that's a big "if." Please don't go away thinking that what I just said is, here's a new argument for the soul. We don't do brain transfers, let alone half-a-brain transfers. We don't have any experiments that suggest one half wakes up, but not the other half. All I'm saying is that if someday we found that, at that point, we'd have an argument for the soul. Well again, let me put away the soul theory. I was exploring it because it's interesting to think about its implications. But since I don't believe in souls, I want to choose between the body view and the personality view. Both of them, as we saw in the face of fission, need to accept a no branching rule. If they're going to survive thinking about this case at all, we need to throw in a no branching rule. Whether or not you find the no branching rule hard to believe, if both views are stuck with it, well, then we're stuck with it. So let's try to choose between the personality theory with the no branching rule and the body theory with the no branching rule. Which of these should we accept? Which of these is the better theory of personal identity? Answer: "I'm not sure." Over the course of my philosophical career, I have moved back and forth between them. There was certainly a long period of time in which I found the personality theory, that is, the personality theory with a no branching rule, to be the better and more plausible theory. And it certainly has any number of advocates on the contemporary philosophical scene. But at other times in my philosophical career, I have found the body theory, that is to say, the body theory with the no branching rule, to be the more plausible theory.
And it is certainly the case that the body theory has its advocates among contemporary philosophers. For what it's worth--and I don't actually think that what I'm about to say is worth all that much--I'm going to share with you my own pet belief. These days I'm inclined to go with the body theory. I'm inclined to think that the key to personal identity is having the same body, as long as there's no branching, as long as there's no splitting. But it's certainly open to you to decide that you think no, no, the personality theory is the stronger view. I can't settle the question. I don't have any more philosophical arguments up my sleeve on this issue. But I do have another point that's worth considering. Although I'm inclined to think that the body theory may be the best view about what's the key to personal identity, I'm also inclined to think it doesn't really matter. We've been posing the following question. We've been asking, "What does it take for it to be true that I survive?" And it may be that what we should conclude is, whatever the best answer to that question is, it's not the question we should really have been thinking about. We weren't going to be in a position to see that until we went through all the stuff we've been going over for the last couple of weeks. But now that we're here, we're in a position perhaps to raise the question, should we be asking what it takes to survive? Or should we be asking about what matters in survival? Now, in posing this question, I'm obviously presupposing that we can draw a distinction between the question, "Do I survive? Is somebody that exists in the future, whatever, me?" and the question, "What was it that I wanted, when I wanted to survive? What was it that mattered in ordinary survival?" And it might be that these things can actually come apart. To see this, suppose we start by thinking again about the soul view. Suppose there are souls. I don't believe in them, but let's imagine. Suppose there are souls. 
And suppose that souls are the key to personal identity. So somebody is me if they've got my soul. Or, to put it more straightforwardly, next week the person that's me is the person with my soul. I survive as long as there's somebody around with my soul. A hundred years from now, am I still around? Well, if my soul's still around, that's me. That's what the soul theory says. And suppose it's the truth. Now, consider the following possibility. Suppose that people can be reincarnated. That is to say, at the death of their body, their soul takes over, animates, inhabits, gets connected to a new body that's being born. But, unlike the kind of reincarnation cases that get talked about in popular culture and various religions where, at least under the right circumstances, you can remember your prior lives, let's imagine that when the soul is reincarnated, it's scrubbed completely clean, no traces whatsoever of the earlier life. No way to retrieve it. No karmic similarities of personality or anything, just starts over like a blank slate. Like a blackboard that's been completely erased, we now have the very same blackboard, and now we start writing new things on it. Imagine that that's the way reincarnation worked. So somebody asks you, "Will you still be around in 1,000 years?" The answer's going to be, yes, because my soul will be reincarnated. In 1,000 years there'll be somebody that has the very same soul that's animating my body right now. Of course, that soul won't remember being Shelly Kagan. It won't have any memories of its prior life. It won't be like Shelly Kagan in any way in terms of Shelly Kagan's desires or ambitions or goals or fears. It won't be the case that we can see its personality emerging through karmic cause and effect in any way that's a function of what I was like in my life. It'll be Shelly Kagan, because it's Shelly Kagan's soul, but with no overlap of personality, memories, anything. Then I want to say, who cares?
The fact that I will survive under those circumstances doesn't give me anything that matters to me. It's no comfort to me to be told I will survive--even granting that the soul is the key to personal identity--if there's no similar personality, no memories, no beliefs, no retrievable memories of past lives. Then who cares that it's me? If you can feel the force of that thought, then you're seeing how the question "Will I survive?" can be separated out from the question "What matters?" What do we care about? Bare survival of my soul, even though that is the key to personal identity--if it is--bare survival of my soul doesn't give me what I want. It's no more comforting or satisfying than if you said, "You know this knucklebone? After you die, we're going to do knucklebone surgery and implant that knucklebone in somebody else's body. And that knucklebone is going to survive." And I say, "Oh, that's very interesting that that knucklebone will be around 100 or 1,000 years from now. But who cares?" And if the knucklebone theory of personal identity gets proposed and somebody said, "Oh, yes, but you see, that person now with that knucklebone will be you, because the key to personal identity is having the very same knucklebone." I say, "All right, so it's me. Who cares?" Bare knucklebone survival does not give me what matters. Now, the knucklebone theory of personal identity is a very stupid theory. In contrast, the soul theory of personal survival is not a stupid theory. But for all that, it doesn't give me what I want. When you think about the possibility of bare survival of the scrubbed, clean, erased soul, you see that survival wasn't really everything you wanted. What you wanted--at least what I want, and I invite you to ask yourself whether you want the same thing--what I want is not just survival, but survival with the same personality. So even if the soul theory is the correct theory of personal identity, it's not enough to give me what matters.
What matters isn't just survival. It's survival with the same personality. Let's consider the body view. Suppose that the body theory of personal identity is correct. And to be me, there's got to be somebody there that's got my body. Let's suppose the brain version of the theory is the best version. And so next year, there's going to be somebody that's got my brain. But let's imagine that the brain has been scrubbed clean. All memory traces have been completely erased. We're talking complete irreversible amnesia, complete erasure of the brain's hard drive. No traces of desires and memories and intentions and beliefs to eventually be recovered if only we have the right surgery, or procedure, or psychotherapy, or what have you. It's gone. Now, that thing that wakes up after this complete irreversible amnesia will no doubt eventually develop a personality, a set of beliefs, memories. Nobody knows who it is--they find it wandering on the streets--so they call it John Doe. John Doe will eventually have a bunch of beliefs about how the world works, make some plans, get some memories. According to the body theory, that's me. And if the body theory is correct, well by golly, it is me. And all I can say in response to that is, it's me, but who cares? So what? I'm not comforted by the thought that I will still be around 50 years from now, if the thing that's me doesn't have my personality. Mere bodily survival isn't enough to give me what I want. I want more than mere bodily survival. I want to survive with the same personality. So even if the body theory of personal identity is the right theory, what I want to say in response to that is, "So what?" If the really crucial question is not "Do I survive," but "Do I have what I wanted when I wanted to survive?" the answer is the body theory doesn't give it. I don't just want to survive. I want to survive with the same personality.
Should we conclude, therefore, that the key to the important question--namely, "What matters?"--the answer to that question, should we conclude, is, same personality? That's a question we'll have to take up next time.
[YaleCourses_Philosophy_of_Death / 20_The_value_of_life_Part_II_Other_bad_aspects_of_death_Part_I.txt]

Professor Shelly Kagan: Last time, I invited you to think about life on the experience machine, where the scientists are busy stimulating your brain in such a way as to give you an exact replica, from the inside, of what it would be like having identical experiences to the ones you would have if you were really doing--well, whatever it is that's worth doing. Climbing the Alps, writing the great American novel, raising a great family that loves you, being creative. Whatever it is you think is worth having, the experience machine gives you all the experiential side of those things. But you're not really doing those things. You're actually just floating in the scientist's lab. And we ask ourselves, would you want to live a life on the experience machine? Would you be happy or would you be unhappy, to discover that you actually have been living a life on the experience machine? Most of us, when we think about this, find ourselves wanting to say, no, we wouldn't want to have a life on the experience machine. I've been discussing this sort of example for many, many years. And there's always a group of people who think, yes, life on the experience machine is perfect as long as you've got the right tape playing. But the vast majority always says, no, there's something missing from that life. It's not the ideal of human existence; it's not the best possible life we can imagine ourselves having. But that means, if we think something's missing, we then have to ask ourselves, what's missing? What's wrong with the experience machine? The one thing we can conclude immediately is, if you think life on the experience machine is missing something, that the hedonist--and views like hedonism--must be wrong, insofar as they say that all that matters for well-being--for the best possible life--is getting the right kinds of experiences, getting the right kinds of mental states.
Because by hypothesis, the experience machine gets the mental states right, gets the insides right. So, if something's missing from that life, there's more to the best kind of life than just having the right mental states, than just getting the insides right. Well, we ask ourselves then: what's missing? I think different people will answer that in different ways. And if we had more time we could spell out rival theories of well-being, which could be interestingly distinguished one from another in terms of how they answer the question, "What's missing from the experience machine?" on the one hand, and "Why are the things that are missing from the experience machine worth having?" on the other. Different theories of well-being might answer that in different ways. Instead of trying to pursue those alternative theories in a systematic fashion, let me just gesture toward some of the things that seem to be missing from that kind of life. Well, first of all, and most, perhaps, obviously--if you're just spending your life floating in the scientist's lab, you're not actually accomplishing anything. You're not actually getting the things out of life you thought you were getting. You wanted to be climbing the mountain, but you're not actually climbing a mountain. You're just floating there. You wanted to be writing the great American novel, but you're not writing the great American novel. You're just floating there. You wanted to be finding the cure for cancer, but you're not actually finding the cure for cancer. You wanted to be loved, but you're not actually loved. You're just floating there. Nobody other than the scientist even knows that you exist. So, there's a variety of things you wanted. You wanted to know your place in the universe, but you don't even have that kind of knowledge either, because you think you're writing novels, finding the cure for cancer, climbing Mount Everest. You're completely deceived about all those things.
So you don't have the kind of self-knowledge that many of us value. Well, as I say, different theories would try to systematize these examples in different ways: that we don't have any kind of accomplishments, we don't have knowledge, we're not in the right kinds of loving relationships. Different theories might have different explanations as to--are these things valuable because we want them, or do we want them because we recognize they're valuable? But I won't try to pursue those questions here. And indeed, trying to work out the details of these views would be complicated as well. Take the example of accomplishment. Well, we all think accomplishment's important, but it's not as though any old accomplishment is important. If somebody sets themselves--or so it seems to me at least--the goal of making the biggest rubber band ball in the Eastern United States, I suppose there's a sense of the word in which that's an accomplishment if they've got it, but it doesn't strike me as the kind of accomplishment which makes for a particularly valuable life. So, we might have to distinguish between any old accomplishment and genuinely valuable accomplishments. But again, just put those details aside. We can say that there are certain things that are good above and beyond experiences--the right kinds of accomplishments, the right kind of knowledge. After all, not every bit of knowledge is equally valuable. It's one thing to know your place in the universe, or to know the fundamental laws of physics. It's another thing to know what was the average rainfall in Bangkok in 1984. I'm not clear that that kind of knowledge gives a whole lot of value to your life. So, we need the right kinds of accomplishments and the right kind of knowledge and the right kinds of relationships. But imagine you've worked that out. The crucial point is that it takes more to have the best kind of life than just getting the insides right.
It also requires getting the outsides right--whatever that comes to--having in your life not just experiences but the right kinds of goods or accomplishments or whatever term we use for it. Now, let's say, instead of pursuing the questions of how exactly that theory should go, notice that if we had that theory we could still evaluate in principle--whatever the practical difficulties might be--in principle we could still evaluate rival lives. We could talk about adding up all the positive experiences along with all the--ask yourself how many goods, how many accomplishments of the right sort were in that life? And that's on the positive side of the ledger. And against that we would then have to subtract the sum total of the negative experiences, all the failures and deceptions or what have you. Those would count against the overall value of your life. We could still say it's--how good your life is, is a matter of adding up the goods and subtracting the bads. But we would now have a somewhat broader, or more encompassing or inclusive, list of goods, and a more broad and encompassing list of bads--not just experience, but also these various other accomplishments, whatever exactly that list comes to. So, we could still evaluate rival lives. My life would've gone better had I chosen to become a farmer instead of chosen to become a doctor. Or my life would've gone better for this period of ten years, but then it would've become worse. Or what have you. Or when we ask ourselves, how will things go for me over the next couple of weeks if I go on vacation versus staying back here? We add up the goods, subtract the bads--whatever our favorite list is--and we come to our best educated guess about the rival evaluations of not just lives as a whole, but chunks of lives. Now, what do those totals come to? Well, you might think it's an empirical question, and in fact I am inclined to think it's an empirical question, varying from person to person. 
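The ledger picture sketched here can be put as a toy calculation. A minimal sketch, purely illustrative: the `ledger_total` helper and every number below are invented for the example, not taken from the lecture.

```python
# Toy version of the "add up the goods, subtract the bads" ledger.
# All scores are made-up illustrations, not anything from the lecture.

def ledger_total(goods, bads):
    """Sum the positive entries of a life (or a chunk of one) and
    subtract the negative entries."""
    return sum(goods) - sum(bads)

# Two hypothetical lives, scored on the expanded list of goods
# (experiences, accomplishments, knowledge, relationships) and bads
# (pains, failures, deceptions).
farmer_life = ledger_total(goods=[30, 20, 10], bads=[5, 5])   # 60 - 10 = 50
doctor_life = ledger_total(goods=[40, 25], bads=[30, 15])     # 65 - 45 = 20

# On this made-up scoring, the farmer's life would have gone better,
# echoing the lecture's farmer-versus-doctor comparison.
```

The same helper applies equally to whole lives or to chunks of lives, such as the next two weeks on vacation versus staying home.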
But it's worth taking a moment to flag the fact that there are people, there are philosophers, who think we can generalize across all humans. You might say that optimists are people who think that for everybody in every case, in every circumstance, the total is always positive. "Life's always worth living; it's always better than non-existence." That's what the optimist thinks--not just for themselves individually, but for everybody, the total is always positive. Against that, I suppose, you've got pessimists--pessimists who say, "No, no. Although life perhaps has some good things, the overall grand balance is negative for everybody in every circumstance. We'd all be better off dead, or perhaps more accurately still, all be better off never having been born in the first place." That's what the pessimists say. And in between the optimists on the one hand and the pessimists on the other, you've got moderates who say, "It varies. And for some people the balance is positive, for some people perhaps the balance is negative, whether for their life as a whole or for certain stretches of their lives." We then have to get down to facts about cases, try to describe the instances: perhaps somebody who's in the terminal stages of some illness where they're in a great deal of pain. And as for the various other external goods of life--because they're bedridden, they can no longer accomplish things; perhaps their family has abandoned them. Whatever the details might be, we could describe lives and say, whether or not their life was good as a whole, what the future holds out for them is negative. That's what the moderates would say. It varies from case to case. Well, however we settle that issue, notice there's still one other assumption that all these positions still have in common. We've expanded our list of goods. Nobody's going to deny that among the goods of life are pleasure and other positive experiences.
And among the bads of life are pain and other negative experiences. But we've expanded the list of goods so it includes external goods and not only experiential or internal goods. Still, the views that I've been sketching all still have the following assumption in common. How good it is to be alive is a matter of adding up all of the--call it the contents of life. Add up your experiences and your accomplishments and the particular details of your life as what the story is about. It's as though we've been assuming, and I have been assuming up to this moment, that being alive per se has no value. It's--life itself is a container which we fill with various goods or bads. And deciding how valuable it is, how good it is for me to be alive is a matter of adding up the value of the contents of the life. But the container itself is a mere container. It has no value in and of itself. We could say that what I've been presupposing up to this point is the "neutral container theory" of the value of life. Hedonism is a version of the neutral container theory. How valuable--how well off you are, how valuable your life is, is a function of the contents, the pleasure and the pain. We've expanded the list of goods that can go within your life, but for all that, we've still been acting as though the neutral container theory is the right approach. But against this there are those who think, no, in addition to thinking about the value of the content of life, we have to remember--so these people claim--that life itself is worth having. There's a benefit to me above and beyond the question of what's going on within my life--am I loved, am I accomplishing things, am I having nice experiences or not? Above and beyond the question of the contents of my life, we have to remember that the mere fact that I'm alive gives my life some value. So, these are "valuable container" theories. Now, think about what it would mean to accept a valuable container theory. 
You're saying that being alive per se has some positive value. Well, actually, the first remark is, it probably wouldn't be completely accurate to describe these views as saying it's being alive per se. After all, a blade of grass is alive, and I presume that even fans of what we might call valuable container theories don't think, "Oh, wouldn't it be wonderful, as long as I was alive in the way that a blade of grass is alive." Life may have value in and of itself, but it's not mere life. What we want is the life of a human. We want a life in which we're accomplishing things, there's agency, and the life of knowing things. Because you have to be a knower in order to have knowledge. The life of somebody who can have an emotional side. So, it's something like the life of a person that people presumably mean when they're inclined to say that being alive per se is valuable. All right. Note that point; keep it in mind. For simplicity I'll talk about these views as though they say life per se is valuable. Actually, I suppose there could be a more extreme view still. It seems implausible to me, but I suppose it's worth noticing there are people who think, "No, being alive per se--even though there I am and my brain has been so thoroughly destroyed that I'm no longer able to know anything, no longer able to relate emotionally to anybody, no longer able to accomplish anything, there I am in a persistent vegetative state, but at least I'm alive." You can imagine somebody who has that view. I've got to say I find that a pretty implausible view. So I'm going to restrict myself, at least when I think about it, to versions that say it's the life of a person per se that's valuable. Now, notice that if we accept this view, to decide how well off I am, or somebody else is, you can't just add up the contents of the life.
You can't just add up all the pleasures and subtract the pains, or add up all the accomplishments and subtract the failures, or add up all the knowledge and subtract the ignorance and deception. Doing that in terms of the contents gives you a subtotal, but that subtotal is no longer the entire story. Because we also have to add in, if we accept a valuable container theory, some extra positive points to take account of the fact that, well, at least you're alive or have the life of a person--or whatever it is that you think is valuable in and of itself. So first we get the content subtotal; then we add some extra points for the mere fact that you're alive. Now, notice that since we are adding extra positive points for the fact that you're alive, even if the contents subtotal is negative, the grand total could still be positive. Suppose that being alive per se is worth plus a hundred points, just to make up some number. Even if your content subtotal was negative ten, that doesn't mean you're not better off alive, because negative ten plus the extra hundred points for the mere fact that you're alive is still going to give you a positive total, plus 90. So, the point of thinking about the possibility of accepting a valuable container theory is to remind us that in deciding are you better off dead, has death deprived me of something good or not, it's important to not just focus on the contents but to also remember to add some positive points above and beyond the content subtotal to take into account the value of the sheer fact that you're alive. If you're a fan of the neutral container theory, you won't have anything extra to add, because life per se is just a zero. It's strictly a matter of the contents. But if you accept a valuable container theory, you have to add something more. And so even if, you might say, the way my life is going in terms of its contents is bad, being alive per se might still be a good thing.
Have to add some extra points. How much extra? Well, here we're going to have, of course, more modest and more bold versions of the valuable container theory. Let me just distinguish two broad types. What we might call modest versions of the valuable container theory say, although being alive per se is good, if the contents of your life get bad enough, that can outweigh the value of being alive so that the grand total is negative. Modest container theories, that is, say there's a value to being alive, but it can in principle be outweighed. Whether it gets outweighed easily, or whether it's very, very difficult and the contents have to be horrible to outweigh it, depends on how much value you think being alive per se has. So, those are modest theories--positive value for life, but it can be outweighed. Against that, you can imagine someone who thinks being alive per se is so incredibly valuable that no matter how horrible the contents are, the grand total will always be positive. It's as though being alive is infinitely valuable in comparison to questions about the contents. We could call this the "fantastic valuable container theory" as opposed to the "modest valuable container theory." I suppose that label gives away where I want to come down on this. I find the fantastic valuable container theory fantastic in the sense of incredible. I can't bring myself to believe it, which--I have some sympathies for valuable container theories, but I also have some sympathy for neutral container theories. Sometimes I'm drawn toward the neutral view; sometimes I'm drawn toward the thought that being alive per se is good for you. But even in those moments when I'm drawn towards valuable container theories, it's always the modest version. I don't find myself drawn toward the fantastic version. Now, if we make these distinctions, then again, remembering that the question we've been asking ourselves is, "So why is death bad?" 
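The contrast between the neutral, modest, and fantastic container theories can be summarized with the lecture's own arithmetic. A minimal sketch: the -10 and +100 figures come from the example above, while modeling the fantastic view as an infinite bonus is my own gloss, not the lecture's formulation.

```python
import math

def grand_total(contents_subtotal, container_value):
    """Contents subtotal plus whatever value being alive per se adds."""
    return contents_subtotal + container_value

# Neutral container theory: life per se adds nothing, so the
# contents settle the question.
assert grand_total(-10, 0) == -10

# Modest valuable container theory: being alive is worth, say, +100
# points, so -10 in contents still leaves a positive +90 total...
assert grand_total(-10, 100) == 90

# ...but contents bad enough can outweigh the modest bonus.
assert grand_total(-150, 100) == -50

# Fantastic container theory (glossed as an infinite bonus): no
# contents, however horrible, ever make the grand total negative.
assert grand_total(-1_000_000, math.inf) == math.inf
```

The design choice that distinguishes the two valuable-container views is just whether `container_value` is a finite number that sufficiently bad contents can outweigh, or a value nothing can outweigh.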
The deprivation account says, death is bad for you insofar as, or it's bad for you when, by virtue of dying now, what you've been deprived of is, another chunk of life that would've been good for you to have. And what we now see is that--to see whether that could be the case or not, we've got to get clear in our own minds about whether we believe in a neutral container theory, a positive, valuable container theory or--and among those, between a fantastic and a modest container theory. If we are neutralists, we're going to say, the question is, what would the contents of my life have been, for the next year, ten years, whatever? If that would've been worth having, then--if the next chunk of my life would've been worth having--then it's bad for me that I die now instead of living for the next ten years. On the other hand, if the balance from here on out would've been negative, then it's good for me that I died now instead of being kept alive with a life not worth living. That's how the neutralists put it. If we are valuable container theorists, we think the answer has got to be, well, look at the contents, but don't forget to add some extra points, even if the next five years for you would've been, in terms of the contents, modestly bad--perhaps the value of at least being alive at all outweighs it, so it still would've been better for you to be alive. But if the contents get bad enough, then you'd be better off dead. Notice that on the modest view, if we ask ourselves, would it have been good to be immortal? the answer's going to depend on not just whether we accept Bernard Williams' claim that immortality would be bad for you, because we now realize that what Williams was talking about was the contents of an immortal life. And that's no longer an adequate view, or at least it's no longer a complete story, if we are valuable container theorists. 
We could say--you could imagine somebody saying, "Oh yes, you're right, Williams, the contents get negative, but that's still outweighed by the mere fact that you're alive. So on balance, being immortal is a good thing." Whether that's right or not depends on just how bad would it be to be immortal. Because, of course, if you're a modest, if you accept the modest version of the valuable container theory, then if the contents get bad enough, that can outweigh the positive value of life. Against that, fans of the fantastic valuable container theory can say, it doesn't really matter whether Williams is right. Even if being immortal would become horrendously boring and tedious or worse, it doesn't matter. The value of being alive per se outweighs that. So you're always better off being alive. So more life would always be better, no matter how horrible the contents might be. So being immortal really would be a good thing for you. Death always is a bad thing. That's what you can say if you accept the fantastic container theory. I don't find the fantastic container theory myself--I don't find it particularly attractive. I'm inclined to think not only that--not only that the contents of life would be bad, eventually, for all of us if we were immortal--but that it would be bad enough to outweigh whatever value, whatever positive value being alive per se may have for us. So, I'm inclined to think, eventually immortality would always be bad overall. But let me remind you that saying that does not rule out the possibility of consistently going on to say that even though it's a good thing that we die, because eventually immortality would be horrible--for all that, death could still come too soon. It could still be the case that we die before life has turned bad. We die while it's still the case that living another ten years or twenty years--or for that matter five hundred years--would still or could still have been good for us. 
It's compatible with thinking that immortality would be bad to think that in fact death comes too soon. But of course, we now have a return of the division between moderates, optimists and pessimists. In this more chastened version of optimism, optimists say, "Even if immortality would be bad eventually, after a million years or ten million years or what have you, the next chunk of life would've been good for all of us." They're optimists in this strange sense if they think life would've been good, which means of course that the fact that we die is bad for us. Because we all die too soon. That's what the optimists might say. Against that, the pessimists might say, "Boy, death comes not a moment too soon for any of us. The next chunk of life is always not worth having, always worse than nothing." And in between these two extremes are the moderates, who say, "For some of us, death comes too soon. For some of us, death does not come too soon." There's a quote I want to read. It's actually out of place now. I should have read it a lecture or two ago when I started talking about immortality, but I misplaced it. So, I found it this morning. So before I just leave the subject of immortality, let me conclude with some words of wisdom from a former Miss USA contestant. She was asked the question, "Would you want to live forever?" And she responded, "I would not live forever, because we should not live forever. Because if we were supposed to live forever, then we would live forever. But we cannot live forever, which is why I would not live forever." Isn't that nice? All right. So I've been talking for, actually now a couple of weeks I suppose, about the central badness of death. Why is it that death is bad for me? And the answer I propose is the deprivation account. The central bad thing about the fact that I'm going to die is the fact that because I'll be dead I'll be deprived of the good things in life.
And we've now seen that that's a bit crude, right? We have to talk not just about the good things in the life, but about the good of life itself, and we have to notice that perhaps on certain views, for certain cases, it's not really the case that when I die I'm being deprived of a good life. Because the next chunk, or perhaps from there on out, it would've been bad. But still, details and complications of the sort we've been considering aside, the fundamental badness of death is that it deprives me of life worth having. But although I've been at pains to say this is the fundamental bad thing about death, I think one could make the case that this isn't the only bad thing about death, even if we're focusing on why death is bad for me. There are other features of death, as we experience it, that are separable from the deprivation account, that at least add to the way that death occurs for us, where we then have to ask the question, does this add to the badness of death? Or conceivably for some of these things, perhaps it mitigates it; it minimizes it in one way or another. So, what I want to do is take at least a couple of minutes and pursue some of these extra features as well. Here's an example. It's not merely true that you're going to die. It's inevitable that you're going to die. There's no avoiding the fact that you're going to die. I mean look, you're all going to college, but it wasn't inevitable that you go to college. Had you chosen not to, you could've avoided going to college. But it doesn't matter what you choose, you can't avoid dying. So it's not just merely the case that in fact we are all going to die; it's a necessary truth that we're all going to die. So we might ask, what about this inevitability of death? Does that make things worse? And here I want to distinguish between the individual question about the inevitability of death, and the universal question.
So just start by thinking about the fact that it's unavoidable that you're going to die. Does the unavoidability of death make it better or worse? And the interesting thing is, I think you can get a feel for both possible answers here. On the one hand, you can imagine somebody who says, "Look, it's bad enough that I'm going to die, but the fact that there's nothing I could do about it just makes it worse. It's like adding insult to injury that I'm powerless in the face of death. I cannot escape the Grim Reaper. This sheer powerlessness about this central fact about the nature of my existence is an extra insult added to the injury." Against that, however, there are those people who'd want to say, "No. Actually, the inevitability of my death reduces the badness." You all know the expression, "Don't cry over spilt milk." Right? What's done is done. You can't change it. And when you focus on the fact that you can't change something, it loses some of its power to upset you. Well, if that's right, and if we then realize that there's nothing I can do about the fact that I'm going to die, then perhaps some of the sting, some of the bite, is eliminated. Try getting upset about the fact that two plus two equals four. Try feeling upset at your powerlessness to change the fact that two plus two equals four. Suppose you wanted two plus two to equal five. Can you work up anger and regret and dismay over that? Well, most of us, of course, can't. Because when we see that something is just necessary, it reduces the sting of it. The philosopher Spinoza thought that if we could only recognize the fact, what he at least took to be the fact, that everything that happens in life is necessary, then we'd get a kind of emotional distance from it; it would no longer upset us. We could no longer be disappointed, because to be disappointed in something presupposes that it could've been some other way.
And Spinoza thought if you see that it couldn't go any other way, then you can't be sad about it. Well, if we see that our death is inevitable and we really internalize that fact, perhaps that would reduce the badness of it. Well, maybe that's right, but going back to the first hand, I don't know how many of you have read Dostoyevsky's short novel Notes from Underground. The underground man is upset about--if I remember correctly--he's upset about the fact that two plus two equals four and there's nothing that he can do about it. So he curses existence, curses God for having made him so impotent that he can't change the fact that two plus two equals four. And another philosopher, Descartes, in thinking about God's omnipotence, thought that it wouldn't be good enough if God as omnipotent couldn't change the facts of mathematics. And so he imagines that God, as omnipotent, could've made two plus two equal five. So it is indeed a fact of our powerlessness that we're stuck with the necessities. God isn't stuck with them. And so Dostoyevsky takes that thought and runs with it and says, "Yeah. It doesn't help to say that it's inevitable. It makes it worse." Well, there are both sides. And as I say, I myself, in different moods, get pulled in both ways. What about the fact that not only is it inevitable that I'm going to die, it's inevitable that we're all going to die? Does the universality of death make things better or worse? And again, you can sort of feel the pull both ways. On the one hand you say, it's bad that I'm going to die, but I'm not a monster. It makes me feel even worse that everybody else is stuck dying--or perhaps we should say dying too soon, in light of our discussion about immortality. It's a pity that most everybody, or perhaps everybody, dies too soon. That makes it even worse. On the other hand, you know, let's be honest here, we also know the expression, "Misery loves company."
And there's at least some comfort to be had, isn't there, in the realization that this thing isn't just true for me. It's not like the universe has singled me out for the deprivation of dying too soon. It's something that it does to everybody. So perhaps there's some comfort in the universality of death. Well, here's a different aspect of death worth thinking about. What about the variability of death? After all, it's not just the case that we all die. And I'll stop saying die too soon. Let's just suppose we understand that clause to be implied in what I'm saying. It's not just the case that we all die. There's a great deal of variation in how much life we get. Some of us make it to the ripe old age of 80, 90, 100 or more. Others of us die at 20, or 15, or 10, or younger. Even if death were inevitable, it wouldn't have to come in different-sized packages. That is, it wouldn't have to have variability. We could imagine a world in which everybody dies--everybody dies at the age of a hundred. Does it make things worse or better that there's this kind of variability? From the moral point of view, I suppose, it's fairly straightforward to suggest it makes things worse. After all, most of us are inclined to think that inequality is morally objectionable. It's bad that, through no fault of their own, some people are poor and other people are rich. If inequality is morally objectionable, then it's very likely we're going to think it's morally horrendous that there's this crucial inequality: some of us die at the age of 5 while others get to live to 90. But in keeping with the focus of our discussion about the badness of death, I want to put aside the moral question and think about how good or bad for me it is that there's variability in death. Well, we might say, let's look at it from two basic perspectives, those who get less than the average lifespan and those who get more than the average lifespan.
From the point of view of somebody who gets less, this is obviously a bad thing. It's bad enough that I'm going to die too soon. I said I wasn't going to keep saying that remark, and here I am saying it anyway. It's bad enough that I'm going to die. But what's even worse is I'm going to get even less than the average amount of life. That's clearly an extra bad. But we might then wonder: for every person who gets less than the average amount of life--suppose we take the median, the amount of life such that exactly 50 percent of the people get more and 50 percent of the people get less. For every person who has less than the median amount of life, there's another person who has more than the median amount of life. That person gets to say, "Hey, well, you know, it's a pity that I'm going to die, or die too soon, but at least I'm getting more than the average. That's a plus." So perhaps these two aspects balance themselves out. There are people who are basically screwed by the fact that they get less than the average amount and people who are benefited by getting more than the average amount. So perhaps in terms of the individual badness of death that's a wash. Maybe. Except it seems to me it's a further fact about human psychology that we care more about being short-changed than we do about being, as we might put it, overcompensated. I rather suspect that for people who have less than the average of something, it hurts them more than it benefits the people who have more than the average of something. And if that's right--and that seems likely to be the case, especially for something like death--the extra bad of the fact that there's variability, and so some people get less than average, outweighs, I suspect, the extra benefit of some people having more than average. Well, let's consider a different feature. We've had inevitability; we've had variability. What about unpredictability?
Not only is it inevitable that you're going to die; not only do some people live longer than others; you don't know how much more time you've got. Now, you might think, well, didn't we already discuss that when we started thinking about variability? But in fact, logically speaking at least, variability, although it's a requirement for unpredictability, doesn't guarantee unpredictability. You could have variability with complete predictability. Imagine that everybody's born with a natural birthmark on their wrist that indicates the precise year, day, and time at which they're going to die. We could imagine a world like this where death is inevitable; everybody's got some date on it. And for that matter, there could still be variability. Some people live 80 years, some people live 20 years. But there's no unpredictability. Because of the birthmark, everybody knows exactly how much longer they've got. Well, in our world we don't have that. In our world, not only do we have variability, we've got unpredictability. Does that make things better? Or does that make things worse? Would it be better to know when you were going to die? Well, one way in which unpredictability at least has the potential of making things worse is this. You don't know how much more time you've got. You can make a guess based on statistics, but as we saw, there's wild unpredictability. You can think, "Look, the average lifespan in the United States is whatever it is, 82 years. So--you guys are in your 20s--I probably have roughly another 60 years to go." And as you're busy calculating all this, you're walking across Chapel Street and you get hit by a truck and you die. Right? Because of unpredictability, you can't really know. And because you can't really know, it's difficult to make the right kinds of plans. And in particular, it's hard to know how to pace yourself. You decide to go off to medical school, become a doctor.
And so not only do you put the time into college, you put the time into medical school, and you put the time into your residency and you put your time into your internship. And that's a very long commitment. It's a long-term plan, which can go wrong if you get sick and die in your early 20s. Well, that's a rather dramatic example, but the same sort of thing in principle can happen to all of us. You make a life plan, what you want to accomplish in your life, and well, obviously enough, some of us will die too soon--not just in terms of, "oh, well, life still could've had good things," but too soon in terms of you didn't get where you wanted to get in terms of your life plan. If only you'd known you were only going to have 20 more years instead of 50 more years, you would've picked a different kind of life for yourself. The unpredictability makes it worse. And indeed, less obviously, it can work the other way as well. You make a life plan, and then, you know, you don't die yet. You continue to stick around, and then your life has this feeling of--at least we can imagine this happening--being sort of anticlimactic. You peaked too soon. If only you'd known you had another 50 years, that you weren't going to die young--weren't going to burn out fast and die young, like James Dean--if only you'd realized you were going to live to the ripe old age of 97, you would have picked a different life for yourself. Now, in thinking about these points, in effect I'm suggesting that the value of your life--so, we previously were talking about different theories of well-being and what makes for the best kind of life. Here we have yet another kind of feature that we haven't talked about. We might think of it as: the overall shape of your life matters. What we could also call "the narrative arc of your life" matters. Let me illustrate the point with some very, very simple graphs. These are not meant to be realistic, but they'll give you the idea. So, we all know the Horatio Alger story, right?
Somebody starts out poor and makes his way through hard work and dedication and effort to riches and success. Rags to riches--that's a wonderful, inspiring life. Let's draw the graph of that life. So here's how well off you are, here is time, and you start with nothing and you end up incredibly well off. That's a great life. That's the Horatio Alger life--H.A. Great life. All right. Now, consider the following story. Here are the axes again. Instead of the rags to riches life, imagine the riches to rags life. Starts off with everything, ends up with nothing. That's the Algers Horatio story. It's the reverse. Now, I doubt there's anybody here who is indifferent with regard to the choice between these two lives. I imagine that everybody here prefers this life. But notice that in terms of the contents of the life, at least the local contents, it's a bit hard to see why that would be the case, right? We've got equal periods of suffering, and of doing slightly better, and slightly better, and slightly better--equal periods of success and suffering in each. For every bad period here there's a corresponding bad period here. For every good period here there's a corresponding good period here. In terms of the contents of your life--I'm being crude, but you see the point--equally good. And even if we accept the valuable container theory, and so we say, "Hah, you know, being alive per se is worth something as well." Well, you're alive for equal periods of time. So the extra points get added either way. You might say, look, if we're not indifferent between these two lives, that's because we think the overall shape of your life matters as well. The narrative arc, as I put it. The story "bad to good" is the kind of story we want for ourselves, while the story "good to bad" is the kind of story we don't want for ourselves. Interesting question. Why is that? And this of course should remind us of the puzzle about Lucretius.
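The claim that the two graphed lives are tied in their contents can be checked with a bit of toy arithmetic. The numbers below are invented purely for illustration; nothing like them appears in the lecture:

```python
# Two toy well-being trajectories, one value per decade of life.
# The numbers are invented purely for illustration.
rags_to_riches = [1, 2, 3, 4, 5, 6, 7, 8]        # starts badly, ends well
riches_to_rags = list(reversed(rags_to_riches))  # starts well, ends badly

# Local contents: every good or bad stretch in one life has a matching
# stretch in the other, so the totals come out the same.
print(sum(rags_to_riches))  # 36
print(sum(riches_to_rags))  # 36

# Even adding a "valuable container" bonus for simply being alive each
# decade leaves the two lives tied.
bonus_per_decade = 2
total_a = sum(rags_to_riches) + bonus_per_decade * len(rags_to_riches)
total_b = sum(riches_to_rags) + bonus_per_decade * len(riches_to_rags)
print(total_a == total_b)  # True
```

Since the totals tie even with a flat bonus for simply being alive, any preference between the two lives has to come from the ordering of the periods--the shape--rather than from what the periods add up to.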
Why do we care more about future non-existence than past non-existence? When the bad is behind us, that seems less bothersome than when the bad is in front of us. You may remember the story from Derek Parfit about having the painful operation. Was it going to be in the future or did it take place earlier today? You don't remember. We're not indifferent. We want the bad behind us, not the bad in front of us. So, whatever the explanation is, we care about the overall shape and trajectory of our life. Now, that being the case, we have to worry then that because of the unpredictability of death our lives may not have the ideal shape. A lot of us might feel that a life like this, where we peak but then we stick around, can at least fail to be as desirable as one in which we end with a bang. If you start thinking about narrative arcs--imagine a novel, right?--if you want your life to be like the plot of a great story, it's not as though you think, "All right, the dénouement must occur at the very last page." It's okay to stick around for a while, but if the high point of the story occurs in chapter 2 and then there are another 67 chapters after that, you think, this was not a well-constructed novel. And insofar as we care about the overall shape of our lives, we might worry about wanting it to have the right shape overall. Where and when do you want to peak, as it were, in terms of your accomplishments? Well, that matters to us, but the trouble is, without predictability you don't know where to put the peak. Because if you try to aim for peaking later, you might not make it that far. If you put the peak too soon, you might stick around for longer than that, and then the peak has come too soon. All of this suggests then that the unpredictability of our death adds an extra negative element. It makes it harder to plan what the best way to live my life would be.
And from that perspective it looks as though it would be better to know how much time you've got left. But then we have to ask--so I'll throw the question out and we'll call it a day; we'll start with this next time--would it really be better to know? Would you want the birthmark? Would you want to know exactly how much time you've got left? All right. See you next time.
YaleCourses_Philosophy_of_Death | 6_Arguments_for_the_existence_of_the_soul_Part_IV_Plato_Part_I.txt

Professor Shelly Kagan: At the end of last class, we started sketching an argument that comes from Descartes, the Cartesian argument, which attempts to show, merely by the process of thinking, on the basis of thought alone, that the mind--we all agree that there are minds--must be something separate from my body. And what's amazing about the argument is that it works on the basis of a pure thought experiment. The thought experiment, you recall, was one in which I tell myself a story in which I'm imagining my mind existing without my body. It doesn't seem especially difficult to do that. But then, we add this extra philosophical premise. If I can imagine one thing without the other, then it must be that those are two things. So my mind must not be my body. My mind must not be the same thing as my body or a way of talking about my body, because of course if talking about my mind just was a way of talking about my body, then to try to imagine my mind without my body would be trying to imagine my body without my body. And that, obviously, can't happen. Look. Suppose we try to imagine a world in which Shelly exists but Kagan doesn't. You can't, right? Because of course, they're just a single thing, Shelly Kagan. And so if you've imagined Shelly existing then of course you're imagining that single thing, Shelly Kagan, existing. And if you imagine Kagan not existing, then you're imagining that single thing, Shelly Kagan, not existing. So you can't even imagine a world in which Shelly exists but Kagan doesn't. Now, it's important not to be confused about this. We can easily imagine a world in which I don't have the last name Kagan or, to switch it around, in which Shelly's not my name. Suppose my parents had named me Bruce. Nothing would be easier.
Imagine a world in which Kagan exists, but Shelly doesn't exist, because nobody in the world is named Shelly. The question is not, "Can you imagine me with a different name?" Bruce instead of Shelly, easy enough. It's rather, "Can you imagine a world in which the very thing that you really are picking out when you refer to me by the name Shelly--namely this thing--can you imagine a world in which that thing exists, but the thing that you're picking out when you use the word Kagan does not exist?" And that you can't do, because in the real world, of course, Shelly and Kagan are just two different names picking out this very same thing. This thing right here. So imagining a world in which Shelly exists but Kagan doesn't, or Kagan exists but Shelly doesn't, is trying to imagine a world in which I exist but I don't. And that's, of course, incoherent. On the other hand, contrast this. Can I imagine a world in which my left hand exists, but my right hand doesn't? Easy. Why is it so easy? Because of course there are two different things. Of course, that doesn't mean that in the real world one of them does exist and the other one doesn't. But it does show that in the real world they are two different things. That's why I could imagine a world with one but not the other. Try to imagine a world in which somebody's smile exists but their body doesn't. You can't do it. You can't have the smile without the body. And of course, no mystery about that. That's because the smile isn't really some separate thing from the body. Talking about smiles, as we've noted before, is just a way of talking about either what the body can do or what a certain area of the body can do. You can try to imagine it. In Alice in Wonderland, the Cheshire Cat disappears and all we have left, the last thing that disappears, is the smile. But of course, when you imagine the Cheshire Cat only having the smile there, you're still imagining the cat's lips, teeth, maybe tongue, whatever it is.
If you try to imagine a smile with no body at all, it can't be done. Why? Because the smile isn't something separate from the body. "Try to imagine my mind," says Descartes, "without my body." Easy. From which it follows that my mind and my body must not be one thing. They must, in fact, be two things. That's why it's possible to imagine the one without the other. So this Cartesian argument seems to show us that the mind is something separate from, distinct from, not reducible to, not just a way of talking about, my body. So it's got to be something extra, above and beyond my body. It's a soul. That's what Descartes argued. And as I say, to this day, philosophers disagree about whether this argument works or not. I don't think it does work, and in a second I'll give you a counter example. That is to say, what I'm going to give is an example of an argument just like it--or at least an argument that seems to be just like it--where we can pretty easily see that that argument doesn't work. And so something must go wrong with Descartes' argument as well. Well, here's the counter example. Some of you, I'm sure most of you, maybe all of you, are familiar with the Evening Star. The Evening Star is, roughly speaking, the first heavenly body that's visible in the sky as it gets dark, at least at certain times of the year. And I'm sure you're also familiar then with the Morning Star. The Morning Star is the last heavenly body that's still visible as dawn comes and it begins to get light. So as a first pass, the Evening Star is the first star that's visible and the Morning Star is the last star that's visible, at the right times of the year. The world that we live in has both the Evening Star and the Morning Star. But try to imagine a world in which the Evening Star exists, but the Morning Star does not. Seems fairly straightforward, right? I get up in the morning as dawn's approaching.
I look around and the Morning Star is not there. There is no star where the Morning Star had been or where people have claimed it would be or something. But the Evening Star still exists. When I go out as sun sets and dusk falls, there is the Evening Star. So, as I say, it's a trivial matter to imagine a world in which the Evening Star exists and the Morning Star does not. And so we've got a--we could imagine then a--Descartes-like argument saying, "If I can imagine the Evening Star without the Morning Star, that shows the Evening Star and the Morning Star must be two different heavenly bodies." But in fact, that's not so. The Evening Star and the Morning Star are the very same heavenly body. In fact, it's not a star at all. It's a planet. It's Venus, if I recall correctly. So look, there's only one thing. The Evening Star is Venus. The Morning Star is Venus. So there couldn't be a world in which the Evening Star exists, but the Morning Star doesn't, because that would be a world in which Venus exists and Venus doesn't exist. Obviously, that's not possible. Of course what you can imagine is a world in which Venus isn't visible in the morning. Still, that's not a world in which the Morning Star doesn't exist, given that what we mean by the Morning Star is that heavenly object, whatever it is, that in this world we pick out at that time in the morning looking up at the sky. So when I refer to the Morning Star, I'm talking about Venus, whether or not I realize it's Venus. When I talk about the Evening Star, I'm referring to Venus, whether or not I realize that Venus is the Evening Star. So as long as Venus is around, well, there's the Evening Star, there's the Morning Star, there's Venus. You can't have a world in which the Morning Star doesn't exist but the Evening Star does. Although you could have a world in which Venus doesn't show up in the morning. Still, from the fact that I can imagine the world in which I look around for the Morning Star--there it isn't. 
I look around for the Evening Star--there it is. You might have thought that showed--didn't Descartes prove to us that that shows--the Evening Star and the Morning Star are two different things? Well no, obviously it didn't. So let's think about what that means. So we've got this argument that Descartes puts forward. I can imagine my mind without my body. And Descartes says that shows that, in fact, my mind is something separate from my body. Well, I can imagine the Evening Star without the Morning Star, so Son of Descartes, "Descarteson," has to say, "Oh, so that shows that the Morning Star and the Evening Star are two different things." But "Descarteson" would be wrong when he says that. The Morning Star and the Evening Star aren't two different things. They're just one thing, namely Venus. In fact, the sentence, "They are one thing," is slightly misleading, right? It's just one thing, Venus. If that argument, if the argument--If trying to run the Cartesian argument for astronomy fails, yet it seems to be an exactly analogous argument, we ought to conclude that the argument for the distinctness of the mind and the body must fail as well. Now, that seems to me to be right. I think the Cartesian argument does fail. And I think the example of the Evening Star and the Morning Star--which is not at all original to me--that this example shows, this counter example shows, that Descartes' original argument doesn't work either. At least, that's how it seems to me, though as I say, there are philosophers that say, "No, no. That's not right. Maybe somehow we misunderstood how the argument goes and it doesn't exactly--although these two arguments seem parallel, they're not, in fact, parallel. There's some subtle differences that if we're not looking carefully, we'll overlook." But, as I say, the debate goes on. One of the reasons for thinking it's not clear whether the argument fails or not is because it's hard to pin down, where exactly did it go wrong? 
Look, take the argument of the planets, the Morning Star and the Evening Star example. I take it that we all agree that when we attempt to run the Cartesian argument in terms of the Morning Star and the Evening Star, it fails. But it's harder to say what went wrong? How did it go wrong? Why did it go wrong? What are the possibilities? Well, we said, look, first claim, first premise. I can imagine a world in which the Evening Star exists, but the Morning Star doesn't. Well, I suppose one possible response would be, "You know, you couldn't really do that. You thought you were imagining a world in which the Evening Star exists and the Morning Star doesn't, but you weren't really imagining a world in which the Evening Star exists and the Morning Star doesn't. You misdescribed what it is you've imagined." That's not a silly thing to say about the astronomy case. Maybe that's the right diagnosis. Could we similarly say, "I didn't really imagine a world in which my mind exists but my body doesn't"? That little story I told last time, I thought I was describing a world in which my mind exists and my body doesn't, but it wasn't really imagining a world like that. That doesn't seem so persuasive over there. It did seem as though I was imagining it. What else could go wrong with the astronomy example? Well, maybe I did imagine a world in which the Morning Star exists and the Evening Star doesn't exist, but maybe imagining doesn't mean it's possible. Normally, we think, if we imagine something, it means it's possible. Here I don't mean, of course, empirically possible. I could imagine a world with unicorns. It doesn't mean I think unicorns are physically possible. All we mean here is logically possible. I can imagine a world with unicorns. It seems to follow that unicorns are logically possible. Imagination seems to be a guide to possibility; but maybe not always. Maybe sometimes we can imagine something that's really impossible. 
Try to imagine--can you do that or can you not do that?--a round square. Can you imagine it? Can you not imagine it? In certain moods, I sort of feel I can just begin to imagine it. Of course, that doesn't really mean it's possible. It seems like it's impossible. So maybe imagination is a flawed guide to possibility. So maybe that's what we should say about the mind-body case. "Yeah, I can imagine a world in which my mind exists but my body doesn't. But that doesn't show that it's really possible, logically possible, to have a world in which my mind exists and my body doesn't." Maybe that's where the argument goes wrong. On the other hand, isn't imagination our best guide to logical possibility? Isn't the reason I think unicorns are logically coherent that I can imagine them so easily? Another possibility. Maybe we should say the mere fact that it's possible for A and B to be separate--for A to exist without B, for example; that's clearly a case where they're separate--doesn't mean that in the actual world they are separate. Maybe the argument goes wrong by assuming that identity is necessary--that when A is equal to B, it's always equal to B, no matter what. Maybe identity, as philosophers like to put it, maybe identity is contingent. Maybe A could be the same thing as B in this logically possible world, but we could imagine a completely different logically coherent world in which A was not the same thing as B. If that's right, then maybe the conclusion should be "well, you know, yeah, the Cartesian thought experiment shows that there could be a world in which there are minds that are not identical to bodies. But that doesn't mean that in this world the mind is not identical to my body. Maybe in this world, minds and bodies are identical, even though in other logically possible worlds the identity comes apart. Identity is not necessary, but contingent, as the philosophers put it."
It's not clear that that's right either. The notion of contingent identity is very puzzling. After all, if A really is B, how could they come apart? There's only one thing there. There's nothing to come apart. There's just A equals B, that single thing. What's to come apart? So where exactly does the argument break down? Is it that I'm not really imagining? I'm just thinking I'm imagining? Is it that imagination's not really a good guide to possibility? I just--Often it is, but not always. Is it that identity is contingent? The interesting thing about Descartes' argument is that it's easy to see something has gone wrong in the case of the Morning Star and the Evening Star, but it's difficult to pin down what exactly went wrong. Different philosophers agree that something's gone wrong in the Morning Star and the Evening Star case, but disagree about the best diagnosis of where the mistake went in. Armed with your pet diagnosis of where the argument goes wrong there, you've got to ask, "Does it also go wrong in the mind and body case?" Well, we could spend more time, but I'm not going to. I think Descartes' argument fails. I think the Morning Star, Evening Star case shows us that arguments like this, at the very least, can't be taken at face value. Just because it looks as though we can imagine it and just because it seems as though from the fact that we can imagine one without the other, it just won't necessarily follow that we really do have two things that are separate and not identical in the real world. I'd be happy to discuss with you, outside class, at greater length my favorite theories as to where the argument goes wrong and why I think it goes wrong in Descartes' case as well. But I suggest that the argument goes wrong. It's not right. And so, Descartes' attempt to establish the distinctness of the mind, the immateriality of the mind, on the basis of this Cartesian thought experiment, I think that's unsuccessful. 
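One way to keep the competing diagnoses straight is to lay the argument out as numbered premises. The following schematic reconstruction is an editorial gloss, not the lecturer's own notation:

```latex
% A schematic reconstruction of the Cartesian argument; the premise
% numbering is an editorial gloss, not part of the lecture.
\begin{enumerate}
  \item I can imagine my mind existing without my body:
        $\mathrm{Im}(M \wedge \neg B)$.
  \item Imagination is a guide to possibility:
        $\mathrm{Im}(\varphi) \rightarrow \Diamond\varphi$.
  \item Identity is necessary, not contingent:
        $\Diamond(M \neq B) \rightarrow M \neq B$.
  \item Therefore, the mind is not the body: $M \neq B$.
\end{enumerate}
% The three diagnoses deny premises 1, 2, and 3 respectively: the
% scenario was misdescribed; imagination sometimes outruns possibility
% (the round square); or identity is contingent after all.
```

On this layout, the Morning Star/Evening Star case shows that at least one of premises 1 through 3 must fail there; the open question is which one, and whether the same premise fails in the mind-body case.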
Well, let's step back and think of where we've been. We've spent the last week and a half or so--maybe a bit more, two weeks--talking about arguments for the existence of the soul. And unsurprisingly--since I announced this was going to be the result before the class had barely gotten started--I don't think any of these arguments work. I believe the attempts to establish the existence of a soul--an immaterial object, the house of consciousness, separate and distinct from the body--I think those arguments fail. But I recognize that this is something that reasonable people can disagree about. And so this is, as will be many times the case over the course of this semester, something that I invite you to continue to reflect on for yourself. If you believe in a soul, what's the argument for it? Well, what we're about to turn to is Plato's discussion of these issues in the dialogue the Phaedo, which, as I told you last week, purports to lay out the final day's discussion with Socrates before he is killed--he kills himself, in fact--by drinking the hemlock, in accordance with the punishment that's been given to him. Now, in the course of this discussion, Socrates and his disciples argue about not so much the existence of the soul; the question really is the immortality of the soul. After all, even if you believe in a soul, as I have remarked previously, that doesn't give us yet any reason to believe the soul continues to exist after the death of your body. The kind of dualist position that we are considering in this class is an interactionist position, where the soul commands the body. That's what makes my fingers move right now. And the body can affect the soul. If I poke my body, I feel it in my mind. So the mind, the soul, and the body are obviously very tightly connected. And so it could be--even if the soul is something separate from the body--that when the body dies, the soul dies as well. That's the question that's driving the discussion in the Phaedo.
Do we have any good reason to believe the soul survives the death of the body? And more particularly still, do we have good reason to believe it's immortal? Socrates believes in the immortality of the soul. And so he attempts to defend this position, to justify it to his disciples, who are worried that it may not be true. It's important to realize--as you read the dialogue, it becomes fairly apparent--that there isn't so much any defense of the belief in the soul. There's some of it, but it's not the primary goal. For the most part, the existence of the soul is just taken for granted in the dialogue. Plato, as a dualist, portrays Socrates as being a dualist, and that's just taken for granted. The question that the philosophical discussion turns on is not, "Is there a soul?" but rather, "Does it survive the death of the body? Is it immortal?" Now, as I said, this is Socrates' last day on earth and you'd expect him to be pretty bummed. You'd expect him to be sad. And one of the striking things is that Socrates is in a very happy, indeed jovial, mood, joking with his friends. Why is that? Well, of course, it's because he thinks, first of all, there's a soul and it will survive and it's immortal. But more importantly still--those are all crucial, but there's an extra ingredient as well--he thinks he's got good reason to believe that when he dies he's going to go, basically, to what we'd call heaven. He thinks there's a realm populated by good gods and maybe other philosophical kindred souls. And if you got your stuff together here in life, you'll get to go there when you die. And so he's excited. He's pleased. Why does he think he's going to go? Well, in thinking about Socrates' belief in the existence of a soul, it's important to understand, it's important to notice, that his take on which stuff gets assigned to the body--what are the bodily things versus what are the soul-like things--is rather different from the way, I think, most of us nowadays would draw the line.
When I talked about arguments for the existence of a soul, I said, "Look, here's one possible argument. I see colors. No physical object could, no purely physical object could see colors. I can taste tastes and have the smell of coffee and so forth." But Socrates thinks all those bodily sensations--that's all stuff that the body takes care of. So unlike those modern dualists who think we need to appeal to something immaterial in order to explain bodily sensations, Socrates thinks no, no, the body takes care of all the bodily sensations, all the desirings and the wantings and the emotions and the feelings and the cravings. That's all body stuff. What the soul does--Socrates thinks--the soul thinks. The soul, in its essence, is rational. It takes care of the thinking side of things. What does the soul think about? Well, the soul thinks about all sorts of things, doubtless. But one of the things that it can do, one of the things that sort of provides the underpinnings, as we'll see, for Plato's arguments for the immortality of the soul is the soul can think about--well, here I'll have to introduce a word of philosophical jargon. Sometimes the term used is "ideas"; sometimes the term is "forms." But the thought is that the soul can think about certain pure concepts or ideas like justice itself, or beauty itself, or goodness itself, or health itself. So to explain all this we need now a sort of crash course in Plato's metaphysics. Obviously, this will be rather superficial. Those of you who would like to know more about it, I recommend reading more Platonic dialogues or taking a class in ancient philosophy. But here's the basic idea. There's all sorts of beautiful objects in the world. Objects can vary in terms of how beautiful they are. But Plato's got the idea that there's nothing in this world that's perfectly beautiful. And yet for all that, we can think about beauty itself. Well, we might put it this way. 
We might say, ordinary, humdrum, everyday, physical objects are somewhat beautiful. They're partially beautiful. As, sometimes, Platonists put it, they "participate" in beauty. They partake of beauty to varying degrees. But none of them should be confused with beauty itself. Or, take justice. There are various arrangements, social arrangements, that can be just or unjust to varying degrees. But we don't think anywhere in the world there's any society that's perfectly just. Yet for all that, the mind can think about perfect justice. And notice how ordinary empirical social arrangements fall short of perfect justice. So whatever perfect justice is, it's not one more thing in the empirical world. It's something we can think about. It's something that things in the empirical world can participate in or partake of to varying degrees. But we shouldn't confuse the physical things which can be just, the people who can be virtuous to one degree or another, with perfect virtue or perfect justice. That's something that only the mind can think about, that we don't actually have in the world, the empirical world itself. Or take being round. The mind can think about perfect circularity. But no physical object is perfectly circular. There are only things that are circular to a greater or lesser degree. So, by thinking about it, by thinking about these kinds of issues, we can see that the mind has some kind of handle on these perfect, well, we need a word. And as I say, Plato gives us a word, "ideas." Sometimes it's translated as "ideas" or "forms." These things that we can think about that are the template, or at least the standard, or maybe at the very least it's that which the ordinary humdrum things can participate in to varying degrees: perfect justice, justice itself, beauty itself, goodness itself, circularity itself, health itself. All of these things are, as philosophers nowadays call them, Platonic forms. 
Ordinary material objects of this world can partake of the various Platonic forms, but they should not be confused with the Platonic forms. But we still--even though we don't bump into the Platonic forms in this world--we can think about them. Our mind has a kind of grasp of them. Of course, the problem is, we're distracted by the comings and goings, the hurly burly of the ordinary everyday world. And so we don't have a very good grasp of the Platonic forms. We're able to think about them, but we're distracted. What the philosopher tries to do--this is Socrates' thought, or Plato's thought that he puts in Socrates' mouth--what the philosopher tries to do is free himself from the distractions that the body poses--the desire for food, the craving for sex, being concerned about pain. All this stuff, hungering after pleasure, all this stuff gets in the way of thinking about the Platonic forms. What the philosopher tries to do, then, so as to better focus on these ideal things, is to disregard the body, put it aside, separate his mind as much as possible from it. That's what Socrates says he's been trying to do. And so because of that, he's got a better handle on these ideal forms. And then, he believes, when death comes and the final separation occurs of the mind and the body, his mind gets to go up, his soul gets to go up to this heavenly realm. Philosophers nowadays call it "Plato's heaven." He gets to go up to Plato's heaven where he can have more direct contact with these things, with the forms. Now, I don't have the time here to say enough to try and make it clear why this Platonic metaphysical view is not only worth taking seriously, but is one that, to this day, many, many philosophers think must be right, at least in its basic strokes. But let me at least give you one example that may give you a feel for it. Think of math. Think of some simple mathematical claim like 2 + 2 = 4. 
When we say that 2 + 2 = 4 or 2 + 3 = 5, we're saying something about numbers that our mind is able to grasp. But what are numbers anyway? They're certainly not physical objects. It's not as though someday you're going to open up an issue of National Geographic where the cover story's going to be "At long last, explorers have discovered the number two." It's not as though the number two is something that you see or hear or taste or could bump into. Whatever the number two is, it's something that our mind can grasp but isn't actually in the physical world. That's the Platonic take on mathematics. There are numbers. The mind can think about them. Things can partake of them. If I were to hold up two pieces of paper, there's a sense in which they are participating in "twohood." But of course, this is not the number two here. If I were to rip these pieces of paper, I wouldn't be destroying the number two. So the number two, the number three, whatever they are, are these Platonic abstract entities that don't exist in space and time. Yet, for all that, the mind can think about them. That's the idea. And it's not a silly idea. It seems like a very compelling account of what's going on in mathematics. What mathematicians are doing is using their mind to think about these Platonic ideas of mathematics. Except Plato's thought was, everything is like that. It's not just math, but justice itself is like that. There are just or unjust things in the world. The mind can think about them, but justice itself--this perfect, this idea of being perfectly just--that's something the mind can think about, but it's not here in the world. It's another abstract Platonic form. So that's the picture. Plato's idea is that if we start doing enough metaphysics, we can see there must be this realm of Platonic ideas, Platonic forms. And we can see that we are able to grasp them through the mind. 
This can't be a job the body does, because the body's only got its bodily capacities, right? It's able to do the five-senses thing. It's the soul that thinks about the Platonic forms. And as Plato's then going to go on to try to argue, given this picture of what the mind can do, he thinks he can persuade us that the mind, the soul, not only survives the death of your body, but will last forever. It's perfect. It's immaterial and can't be destroyed. It's immortal. So he offers a series of arguments for that conclusion, for that position, and starting next time, we'll work our way through those arguments.
[YaleCourses: Philosophy of Death — Lecture 26: Suicide, Part III: The Morality of Suicide, and Course Conclusion]

Professor Shelly Kagan: Last time we turned to questions about the morality of suicide, and I started with two arguments that I called quick and dirty arguments. I suppose it would have been fairer to say that they were really theological arguments, or they were moral arguments that used, in part, theological premises. I suggested that, at least if we look at them in their quick and dirty versions, they were inadequate, and if we're going to make a more careful argument about the morality of suicide, we need to turn to a more systematic view about the contents of morality. We need to look at suicide in terms of the basic moral principles. Now, that's not something we've got the chance to do in detail, but I think we can at least say enough about a couple of basic approaches to the contents of morality, or the basic moral rules, to get the beginnings of an understanding of what might emerge about the morality of suicide if we were to do that more carefully. So, holding off on suicide for the moment, let's ask ourselves, what is it that makes an action morally acceptable or morally forbidden? This is, unsurprisingly, something that different moral theories disagree about. But there's at least one factor or one feature that all, or almost all, moral theories agree about. And that is that the consequences of your action matter. That is, we might or might not think that consequences are the only things that are morally relevant when we think about the morality of your action, but surely it is one thing that's morally relevant--what are the consequences of your action going to be. So, let's think about the morality of suicide with an eye towards consequences, bearing in mind that since we're talking about a moral point of view we need to take into account the consequences as they affect everybody. 
Now, the person who is most affected by suicide is, of course, the person who is killing themself. And at first glance it might seem pretty clear that the consequences of suicide are bad for that person. After all, the person was alive and now they're dead, and we normally would take death to be a bad result. If I were to tell you, "Oh, here's a switch on the wall. If you were to flip the switch, a thousand people who would otherwise be alive would end up dead," you would normally take that to be a pretty compelling argument against flipping the switch. Why? Because the result would be bad. Why? Because a thousand people would end up dead. Well, one person ending up dead isn't as bad as a thousand people ending up dead, but for all that shouldn't we still say it's a bad consequence? And as a result of that, shouldn't we say that however far appeal to consequences goes in terms of giving us our moral theory, don't we have to say in terms of consequences, or with regard to consequences, suicide is immoral? But not so quick! Even though it's true that normally death is a bad thing, it's not always a bad thing. This is the sort of thing that we've learned by thinking about what does the badness of death consist in. Typical cases are ones in which the person's dying robs them of a chunk of life that would've been good for them overall, and because of that dying then is bad for them. But in the kinds of cases that we're thinking about, cases where suicide would be rationally acceptable, and we're now asking whether or not it's morally acceptable--in those sorts of cases, at least the kind of paradigm examples that we've been focused on, the person is better off dead. They're better off dead, meaning that what life now holds out for them--although perhaps not negative through and through--is negative on balance. It's negative on balance; they're not better off continuing to live. They're better off dying. 
And that means, of course, that dying isn't bad for them, but rather good for them, and so their death is not a bad consequence, but rather a good consequence. Provided that you're prepared to accept the possibility of cases in which somebody would be better off if their life ended sooner rather than later, we're led to the conclusion that--from the moral point of view as far as focusing on consequences goes--the consequences might actually be good rather than bad if the person were to kill themself. They will free themself, let's suppose, of the suffering they would otherwise have to undergo. Well, that's--first glance said, consequences says suicide's wrong. Second glance says, consequences says, at least in certain circumstances, suicide's right. Of course, third glance suggests, we can't just focus on consequences for the person who is contemplating suicide. Because from the point of view of morality we have to look at the consequences for everybody. Who else might get affected by the death or suicide of the person? Well, the most obvious people for us to think about at that point then are the family and loved ones--the people who most directly know about and care about the person who is contemplating suicide. And again--I'm running out of glances, but at first glance you might say, well, there the consequences are clearly bad. When the person kills themself that causes, typically, a great deal of distress for the family and friends of the person who has killed themself. Even if that's true, we now have to ask, how do the consequences weigh out? After all, we live in a world in which no single act typically has only good consequences, or only bad consequences. Often our choices are mixed packages where we have to ask whether the good that we can do is greater than the bad that we'd be doing with this act or that act or some third act. 
Even if there are, then, negative consequences in terms of distress to the family, friends, and loved ones, of the person who kills themself, that might still be outweighed by the benefit to the person himself or herself, if it was really the case that he or she would be better off dying. But it's also worth bearing in mind that insofar as we're thinking about people who love and care about the person who is considering dying, then they may actually overall, on balance, be relieved that the suffering of their loved one has come to an end. We will, of course, all be horribly distressed that nature, or the Fates, or what have you, has brought it about that this person's choices are now reduced to killing themself on the one hand, or continuing the terminal stages of some illness where they're incapacitated and in pain. We will, of course, wish there was a serious prospect of a cure, some chance of recovery, wish they'd never gotten ill in the first place. But given the limited choices, continued suffering and pain, on the one hand, or having an end to that suffering and pain, if the person can rationally assess their prospects and reasonably come to believe they're better off dead, then that's a judgment their loved ones can come to share as well. They may well regret the fact--more than regret, curse the fact--that these are the only choices they've got, but still, given the limited choices they may agree, they may come to agree, better to put an end to the suffering. And so when the person kills themself, they may second that choice. They may say, "At least they're not in pain and agony anymore." So, if we look at it from the point of view of consequences--in fact, suppose we had a moral view that said consequences aren't just one thing that was morally relevant in thinking about what makes an action right or wrong. Suppose we took the bold claim that consequences are the only thing that's morally relevant. There are moral views that take this position. 
I suppose the best-known example of this kind of consequence-only approach to morality is utilitarianism. Utilitarianism is the moral doctrine that says right and wrong is a matter of producing as much happiness for everybody as possible, counting everybody's happiness equally. And when you can't produce happiness, then at least trying to minimize the misery and suffering, counting everybody's misery and suffering equally. So, suppose we accept this utilitarian position. What conclusions would we come to then about the morality of suicide? I suppose the conclusion would be a kind of moderate one. On the one hand, we'd be rejecting the extreme that says suicide is never morally acceptable, because to say that, you'd have to be claiming suicide always has bad consequences overall. And that strikes me, although it's an empirical claim, it strikes me as a rather implausible empirical claim. It's, sadly enough, not too difficult to describe cases in which the results may actually be better if the person kills themself rather than having their suffering continue. It may be better for them and better for their family. On the other hand, we certainly wouldn't want--if we were utilitarians--we also wouldn't want to go to the other extreme and say suicide is always morally acceptable, because, of course, to say that it's always morally acceptable is to say that the consequences are never bad when you kill yourself. And that's also pretty obviously an implausible thing to claim. You guys are young, you're healthy, you've got a great future in front of you. If you were to kill yourself, the results wouldn't be good. The results would be worse overall than if you had refrained from killing yourself. So, the utilitarian position is in the middle. It doesn't say suicide's never acceptable, doesn't say suicide is always acceptable. It says, perhaps unsurprisingly, it's sometimes acceptable; it depends on the facts. It depends on the results. 
It depends on comparing the results of this action, killing yourself, to the alternatives open to you. We have to ask, is your life worse than nothing? Is there some medical procedure available to you that would cure you? If there is, and even if your life is worse than nothing, that still doesn't make suicide the best choice in terms of the consequences. Getting medical help is a preferable choice in terms of the consequences. We can even think of cases where your life is worse than nothing, you'd be better off dead, and there is no medical alternative of a cure available to you, but for all that, it still isn't morally legitimate to kill yourself in terms of the utilitarian outlook. Because, as always, we have to think about the consequences for others. And there may be others who'd be so adversely affected by your death that the harm to them outweighs the cost to you of keeping yourself alive. Suppose, for example, that you're the single parent of young children. You've got a kind of moral obligation to look after them. If you were to die, they'd really have it horribly. It's conceivable then, in cases like that, the suffering of your children, were you to kill yourself, would outweigh the suffering that you'd have to undergo were you to keep yourself alive for the sake of your children. So, it all depends on the facts. Still, if we accept the utilitarian position, we do end up with a moderate conclusion. In certain circumstances suicide will be morally justified--roughly speaking, in those cases where you're better off dead and the effects on others aren't so great as to outweigh that. Those will be the paradigm cases in which suicide makes sense or is legitimate, morally speaking, from the utilitarian perspective. But of course, that doesn't yet show that suicide is indeed ever morally legitimate. Because we don't necessarily want to embrace the utilitarian theory of morality. 
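The utilitarian bookkeeping described here — total up everyone's welfare, counting each person equally, and pick the option with the best sum — can be caricatured as a toy calculation. To be clear, this is only an illustrative sketch: the option names and welfare numbers below are invented, and nobody thinks real moral deliberation reduces to this arithmetic.

```python
# Toy sketch of the utilitarian test: for each option, add up the
# (hypothetical) welfare effects on everyone affected, counting each
# person's welfare equally, and choose the option with the best total.
# All names and numbers are invented purely for illustration.

def total_welfare(effects):
    """Sum the welfare effects over everyone affected."""
    return sum(effects.values())

def utilitarian_choice(options):
    """Return the option whose total welfare is highest."""
    return max(options, key=lambda name: total_welfare(options[name]))

# The single-parent case: the patient is better off dead (-10 vs. 0),
# but the children would be devastated, and that outweighs it.
single_parent_case = {
    "continue living": {"patient": -10, "children": 0},
    "end life":        {"patient": 0,   "children": -50},
}

# The paradigm case: the patient is better off dead, and the effect
# on others is not large enough to outweigh that.
paradigm_case = {
    "continue living": {"patient": -10, "family": 0},
    "end life":        {"patient": 0,   "family": -3},
}

print(utilitarian_choice(single_parent_case))  # sums: -10 vs. -50
print(utilitarian_choice(paradigm_case))       # sums: -10 vs. -3
```

The point of the sketch is just the structure of the view: nothing is ever ruled out in advance; everything "depends on the facts," that is, on which numbers go into the table.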
Utilitarianism is what you get, roughly speaking, when we say consequences matter and they're all that matters. But most of us are inclined to think that there's more to morality than consequences. Most of us are inclined to think that there are cases in which actions can have good results and yet, for all that, be morally forbidden. Or actions could have bad results and yet, for all that, still be morally required. That's not to say that consequences don't matter morally; it's to claim, rather, that consequences aren't the only thing that matters morally. Consequences can be outweighed by other morally relevant factors. Well, that's the position that's held by the branch of moral theory known as deontology. So deontologists say other things matter morally besides consequences. In deciding whether your action is right or wrong, you have to pay attention to the consequences, but you have to pay attention to other things as well. What other things? Well, unsurprisingly, this is an area then in which different deontologists will disagree one to the next in terms of what else they want to add to the list of morally relevant factors. But there's one kind of additional factor that most of us in our deontological moods would want to add to the list, and that's this--one, at any rate, that I think is most directly relevant for thinking about suicide. That factor is the factor of not just what was the upshot of your action but how you produced that upshot; not just what the results were, but what was your means of getting those results and more particularly still, did you have to harm anybody to produce the results? Most of us are inclined to think it's wrong to harm people, or at least innocent people. It's wrong to harm innocent people even if the results of doing that might be good. 
Now, I threw in the qualification about innocent people because, of course, it's also true that most of us are inclined to think that self-defense might be justified. Harming people who are attacking you or your friends or your fellow countrymen--that may be legitimate. And so it's not as though we want to say it's never legitimate to harm somebody. But those people are guilty; they're aggressors. What most of us in our deontological moods are inclined to think is it's never legitimate to harm an innocent person. And the crucial point is that's true even if the results would be better. Look, there's no debate between deontologists and utilitarians about harming innocent people in the normal case, because normally of course--you know, suppose I, to make an example--to end the class with a nice big bang, right--I brought my Uzi sub-machine gun. I now take it and go rat-a-tat-tat, killing 15 of you. Well, that would not be something that would have good results. And so, clearly, the utilitarian is going to reject that as well as the deontologist. They're in agreement about that. In the typical case, killing an innocent person has bad results, harms them. It's wrong, full stop, we're done. But what should we say about cases where killing an innocent person has better results? In real life, it's hard to think of cases like that, but we can at least go "science-fictiony" and tell an example. So, here is one of my favorite examples in moral philosophy. Suppose that we have five patients in a hospital who are going to die because of organ failures of one sort or another. One of them needs a heart transplant, one of them needs a kidney transplant, one of them needs a liver transplant, and so forth and so on. Unfortunately, because of tissue incompatibilities, even as they begin to die we can't use the organs from the ones that have died to save the others. Meanwhile, here in the hospital for a routine check-up is John. John's perfectly healthy. 
And as you're doing your exams on him you discover that he's exactly suitable to be an organ donor for all five of the patients. And it occurs to you that if you were to find some way to kill him, but cover up the cause of death so it looked like he died of some unexpected freak seizure, you could then use his organs to save the five. This one gets the kidney, that one gets the other kidney, that one gets the heart, that one gets the liver, and that one gets the lungs. So your choice, roughly, is this. Just give John his routine medical exam, in which case the five other patients die, or chop up John, kill him and chop him up, using his organs to save the five patients. Well, what should we say is the right thing to do in the organ transplant case? In terms of consequences it looks as though, if we tell the story right at least, the results would be better if we chop up John. After all, it's one versus five. And although the death of John is a horrible bad result, the death of the five is an even worse result. And so the results would be better if we were to kill innocent John. Well, if we had more time we could argue about are the results really going to be better, is that a realistic story--what have you--are there other long-term effects on the healthcare profession that we haven't taken into account? But we don't have time to really pursue this story in detail. Let's just suppose we could eventually get the details right; the results really would be better if we chopped up John. Is that the right thing to do? Well, maybe utilitarianism says it's the right thing to do, but it's precisely for that reason that most of us would then say, you know, there's more to morality than what utilitarianism says. Now, whether that objection is a good one is a very, very complicated question, and if you'd like to pursue it, then I invite you to take an introductory class in moral philosophy. 
For our purposes, let's just suppose that most of us are on board with the deontologists when they say there's more to morality than what the utilitarian has, and this example brings it out. It's wrong to kill somebody who is innocent even though by hypothesis the results would be better--it's five to one. People have a right to life, a right not to be killed. And that right weighs in when we're deciding what to do morally, so that it's wrong to kill an innocent person even if the results really would be better. All right, let's suppose we agree with that--accept that. Again, in a fuller class on moral philosophy we'd have to ask ourselves what is the basis of that right, what other deontological rights do people have, what exactly are the contours of that right? But here we can just ask, suppose we accept a right like that, what are the implications of that for the morality of suicide? And now, it seems what we have to say is, suicide is wrong. Suicide is morally unacceptable. Because when I kill myself, well, I'm killing somebody. And didn't we just say as deontologists that killing an innocent person--and I'm an innocent person--killing an innocent person is morally wrong? Well, I'm a person. So, killing me is morally wrong. And it's not really any help to come back and say, but look, we've stipulated that this is a case where the person is better off dead. The results will really be better overall if he kills himself. Yeah, that's right. Maybe that is right. It doesn't matter--because as deontologists we said the right to life is so powerful it outweighs consequences. Just as it was wrong to chop up John, even though the results would be better--five versus one--it's wrong to kill yourself, even if the results would be better. Even if that's the only way to put yourself out of pain, and those are good results, it doesn't matter. The right to life outweighs the appeal to consequences. So as deontologists, it seems, we have to say suicide is forbidden--full stop. 
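This deontological verdict has a distinctive structure: the right to life acts as a side-constraint that rules an option out before any weighing of consequences happens, no matter how good its results would be. That structure can be caricatured in the same toy-model spirit. Again, the case details and numbers are invented purely for illustration, and this is a sketch of the structure of the view, not a serious moral procedure.

```python
# Toy sketch of a deontological side-constraint: first throw out any
# option that kills an innocent person, and only then compare
# consequences among the options that remain. All case details and
# numbers are invented purely for illustration.

def total_welfare(effects):
    return sum(effects.values())

def deontological_choice(options):
    # Step 1: the rights filter. Any option that kills an innocent
    # person is simply off the table, whatever its results.
    permitted = {name: opt for name, opt in options.items()
                 if not opt["kills_innocent"]}
    # Step 2: among permitted options, consequences still matter.
    # (This sketch assumes at least one option passes the filter.)
    return max(permitted,
               key=lambda name: total_welfare(permitted[name]["effects"]))

# The transplant case: chopping up John has the better total (one
# death against five), but it kills an innocent, so the filter
# removes it before the totals are ever compared.
transplant_case = {
    "chop up John": {"kills_innocent": True,
                     "effects": {"John": -100, "five patients": 0}},
    "routine exam": {"kills_innocent": False,
                     "effects": {"John": 0, "five patients": -500}},
}

print(deontological_choice(transplant_case))
```

Notice that, on this simple version of the rule, killing yourself also trips the filter: the agent is an innocent person, so suicide comes out forbidden regardless of how the consequences tally up.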
Well, as usual in philosophy, it's not quite as simple as that. One possible response somebody might make is, but look, morality is only about how I treat others. It's not about how I treat myself. And if we were to accept that claim, then we could say the right to life only covers how I treat others. In particular, it rules out my killing other people even when the results would be good. But it doesn't have any implications for how I treat myself. And in particular then, if the right to life doesn't exclude self-killing, well then, suicide is acceptable. That's a possible moral view, but I find it rather implausible. If we were to start to explain what it is about you that explains why it's wrong for me to kill you, we'd start saying things about how, well, you're a person and, as such, you've got all these plans and so forth and so on. And as a person, you've got certain rights, certain things that shouldn't be done to you. You're not just an object--this is the thought that lies behind much deontological thinking, right? People aren't objects. We can't just destroy them for the sake of better results. Well, that's right; people aren't objects. But of course, I'm a person too. And so when I contemplate killing myself, I'm contemplating destroying a person. So, it's at least difficult to see why we would accept the claim that morality only governs how I treat other people. It seems--although the issue is a complicated one, which we don't have time to pursue further today--it seems to me more plausible to say morality includes rules not only governing how I treat others but also how I treat myself. Yet, if that's right, and if among the moral rules are a right to life, a prohibition against harming people, then don't we have to say, look, it's wrong from the deontological perspective to kill yourself? 
Well, of course, the natural response to this line of thought is to say, but look, when I kill myself--unlike the case of chopping up John to save five others--when I kill myself, I'm doing it for my own sake. I'm harming myself for my own sake. That seems highly relevant in thinking about the morality of suicide. It does seem relevant, though it's not 100% clear what to do with that thought. Here are two possible interpretations of that thought. First of all, you might think that the relevance of saying that I'm harming myself for my own sake is this. If I'm harming myself for my own sake, what I'm saying is, despite the fact that I'm harming myself, I'm better off. After all, we stipulated that we were focusing on cases in which suicide was rational. So, the person is better off dead. If they're better off dead, then although it's certainly true that there's a sense in which they're harming themself--I mean killing yourself is doing harm to yourself--still it's not harm overall. The bottom line, we were imagining, is positive when you kill yourself. And so, although, unlike the case of John where you've harmed him and benefited others--so you have harmed him overall--in the case of suicide, when I harm myself to avoid the suffering I would otherwise go through, I'm not really, as we might say, harming myself overall. So, perhaps the deontological prohibition against harm is really a prohibition against harming people overall. Look, you've got some sort of a disease in your--infection in your leg that has now spread and it's going to kill you unless we amputate your leg. So, you go into surgery and the surgeon chops off your leg. Has he done something immoral? It doesn't seem as though he has. But after all, he chopped off your leg! He harmed you! You used to have a leg and now you don't have one. Well, what we want to say is he didn't harm you overall. 
He harmed you in such a way that it was the only way to leave you better off bottom line, and that's not a violation of the rule against harming. At least, that's a possible thing to say. And if that's the right thing to say, then maybe that's what we should say about the suicide case yet again. Yeah, there's a deontological prohibition against harming innocent people, but what it's really a prohibition against, is leaving them worse off overall. And when I kill myself, I'm not leaving myself worse off overall. And if that's right, then even from the deontological perspective suicide may be morally legitimate. Well, that's at least one possible way to carry out the deontological stand, one possible way of interpreting the remark, "But look, when I kill myself, I'm doing it for my own benefit." Here's another possible way of interpreting that thought. When I kill myself, given that I'm doing it for my own benefit, I've obviously got my own agreement. I can't kill myself against my will. Suicide is something you do to yourself. And so, I have my own consent to what I'm doing. That seems pretty important. Notice how different it is from the case of John. When I chopped up John, I imagine I don't have John's approval. Consent seems to be present in the case of suicide but not in the case of chopping up John. Maybe that's morally relevant as well. Now, to accept that view is, of course, to say we need to add yet another factor into our deontological theory. We have consequences, we have harm doing, but we also have the factor of consent. And so we need to think about the moral relevance of having the consent of the victim. And once we start thinking about that, I think most of us would be inclined to accept the conclusion that consent can make it acceptable to do to someone what would normally be wrong in the absence of their consent. 
By the by, you'll notice that that seems to be one of the things that's relevant in thinking about the surgery case, not the organ transplant case but the performing the amputation of the leg, to save the person who would otherwise die. Surely it seems relevant that the patient has given you permission to operate on them. Here's another example that shows you the relevance of consent. It would not be okay--it would not be morally acceptable for me to go up and hit you in the nose. Just like it wouldn't be okay for you to go up and hit me in the face or the gut. And yet, boxing matches are, I suppose, morally acceptable. Why is that? Because from a deontological perspective the answer is, when people are boxing they've agreed to it. I give you permission to hit me, or at least to try to hit me, in exchange for your giving me permission to hit you, or at least to try to hit you. And it's the presence of that consent that makes it permissible for you to harm me, assuming that you're a better boxer than I am, which I'm confident would have to be the case. So, consent makes it legitimate to harm people, even though in the absence of consent it wouldn't be legitimate. All right, if that's right, then bring that thought home to thinking about the case of suicide. Suicide might be wrong, because after all I'm a person, at first glance. But since I'm killing myself, I've given myself permission. I've given myself consent to harm myself. And if consent makes it permissible to do what would normally be forbidden, then consent makes it permissible for me to kill myself. And so, now we're led again to the conclusion that from a more fully developed deontological perspective we ought to say suicide is permissible, at least if we're prepared to throw in this kind of factor of consent and think that it can just wipe out the protections that would otherwise normally be in place. 
Indeed, if we think that, we're going to be led to a rather bold and extreme conclusion about the morality of suicide. The person has killed himself, so he's clearly consented, and so in every case what he's done is acceptable. Well, maybe that's right--if we're prepared to go that far with the principle of consent. But maybe we shouldn't go that far with the principle of consent. Suppose we're talking after class and you say to me, "Shelly, you've got my permission to kill me." And so I get out my gun and I shoot you to death. It doesn't seem morally acceptable, even though you gave me your permission, especially--Think of even weirder cases. Suppose that you are feeling like you want to be killed because you're overcome with guilt because you believe you killed John Smith. But you're crazy. You didn't kill John Smith. John Smith's not even dead. But in your insanity you think you did do it, and so you say, "Shelly, please kill me." And I know that you're insane, but hey, you know, consent's consent, and so I kill you. Well, that clearly isn't acceptable. Or suppose you're playing with your three-year-old nephew. He says, "Oh yeah, I don't really like being alive. Kill me." Well, that clearly doesn't make it acceptable to kill him or her--well, nephew, it's a him. So, if we start accepting this consent principle, we're led to some pretty implausible conclusions. So, maybe we should throw it out. Maybe we should say, no, consent really doesn't have the kind of power that a minute ago it looked like it did. But I'm inclined to think we shouldn't go that far and throw away the consent principle altogether. Because if we do throw out the consent principle, we're going to find ourselves unable to say some things that I think it's pretty important to us to say. Consider the following example. Suppose that we're in war and we're in the foxhole and a hand grenade has been thrown into the foxhole.
And unless something happens quick, the hand grenade is going to blow up and it will kill my five buddies who are near the hand grenade. Unfortunately, because they're playing cards or whatever, they don't see it. But I see it. But I don't have time to warn them. By the time I tell them what's going on, they won't have time to react. Really, it's do nothing, let them get killed but I probably won't be hurt very much, or throw myself on the hand grenade, my body absorbs the blow, saves my buddies, kills me. Imagine what happens is that I throw myself on the hand grenade. I've sacrificed myself for them. I've done something amazing. Few of us would have it within ourselves to do this, but amazingly enough some people do. And we admire and praise these people. They've committed--they've undertaken an incredible act of heroic self-sacrifice--morally commendable, above and beyond the call of duty we want to say, praiseworthy. But wait a minute, how could it be praiseworthy? The person threw himself on a hand grenade, knowing the result of this was that he was going to die. And so he killed a person, thereby, apparently, violating the deontological right not to have innocent people be killed. Don't talk about "the results are better." Yeah, of course, five buddies saved; the results are better. But that doesn't seem enough to use in our deontological moods. After all, suppose that I see the hand grenade, and so what I do is I take Jones and throw him on the grenade. Well, that's not okay, even though the results are the same. What makes the difference? Why is it morally legitimate for Jones to throw himself on the grenade? The only answer that I can see is, because he agrees to it. He did it to himself; he volunteered, it has his consent. If we throw away the consent principle, we're forced to say what Jones did isn't morally admirable. It's morally appalling, it's morally forbidden. I can't believe that. So, we need a consent principle. 
But on the other hand, we don't want to go with such a strong consent principle that we say, oh, it's okay to kill crazy people, or kill children, just because they say, "Oh, kill me." So we need something--a more moderate form of the consent principle. We need to say consent can do its thing, but only under certain conditions. What exactly are the relevant conditions? Well, this is, of course, one more topic open for debate. We might insist that, look, the permission has got to be given freely. It's got to be given knowing what the upshots are going to be. It's got to be given by somebody who is sane, who is rational, who is competent, who's--and that may deal with the child case as well, who is not yet competent to make this sort of decision. There's room for disagreement about what exactly are the relevant conditions to put into a proper version of the consent principle. We might also want to throw in some requirement that the person have good reasons for his giving you permission. That might deal with the case where you just come up to me after class and say, "Kill me." I mean you're not insane. Well, at least you might not be insane. You know what's going to happen. In some sense, you've reached the age of competence, but you don't have any good reasons for it. Maybe that's enough to undermine the force of consent. Well, suppose we've got some kind of modified consent principle. What should we say about suicide then? Well, it seems to me what we're led to is, once again, a modest view about suicide. The mere fact that the person killed themself won't show that it was morally legitimate because, of course, even though they've given themself permission, they may not have had, for example, good reason, or they might be insane. 
But for all that, if we can have cases--and I take it we can have cases--where somebody rationally assesses their situation, sees that they're better off dead, thinks the case through, doesn't rush into it, makes an informed and voluntary decision, with good reason behind it--in a situation like that it seems to me the consent principle might well come into play, in which case consent will trump or nullify the force of the deontological prohibition against harming innocent people. So, suicide will again be acceptable in some cases, though not in all. And that's the conclusion that seems to me to be the right one, whether we accept the utilitarian position or one of these deontological positions. Suicide isn't always legitimate, but it's sometimes legitimate. It still leaves the question, what should we do when we see, when we come across, somebody trying to kill themself? And there I think there is good reason to ask yourself, are you confident that the person has satisfied the conditions on the consent principle? Perhaps we should err on the side of caution, and assume that the person may be acting under distress, not thinking clearly, not informed, not altogether competent, not acting for good reasons. But to accept that is not to accept the stronger conclusion that we must never permit somebody to kill themself. If we become convinced that they have thought it through, that they do have good reason, that they are informed, that they are acting voluntarily, in some such cases it may be legitimate for them to kill themself, and for us to let them. All right, almost out of time. So, let me shift gears for the very last time, and take a quick look at where we've been. At the start of the semester, I invited you to think hard about the nature of death or the facts about life and death. Most of us try very hard to not think hard about death. It seems to be an unpleasant topic, and we put it out of our mind.
We don't think about it, even when there's a sense in which it's staring us in the face. Every single class of this semester, every single day of this semester, you've come into this building and have walked past a cemetery right across the street. How many times did you notice it? How many times did you stop to think about the complete visual reminder that we are on this Earth for a while, and then we're not anymore? Most of us just don't think about it. Well, of course you are, in some sense, the exceptions. You've spent a semester thinking about it, and I'll be largely content if you've taken the opportunity this semester to take a hard look at the things you believe. Whether or not you ended up agreeing with me, about the various claims that I've put forward, is less important than that you've taken the chance to take a hard look at your beliefs and asked yourself not just what you hoped or wished or kind of believed was true, but what you could actually defend. Still, having said that, it would be disingenuous of me to pretend that I don't also hope that you've come around--if you didn't start out believing what I believe--that you've come around to believing what I believe. As I pointed out on the first day, most people accept a great deal of this package of beliefs about the nature of life and death, that--They believe we have a soul, that there's something more to us than our bodies. And they believe that because they think, given the existence of a soul, we'll have the possibility of living forever. Immortality is a possibility, and we all hope for and crave the possibility that we will live forever because death is, and must be, horrible. It's so horrible that we try not to think about it. It's so horrible that when we do think about it we're filled with dread, terror and fear. And it's just obvious that that's the only sensible reaction to the facts about life and death. 
Life is so incredible that under no circumstances could it ever make sense to be glad that it had come to an end. Immortality would be desirable; suicide could never be a reasonable response. Over the course of this semester, I've argued that that package of beliefs, common as it may be, is mistaken, virtually from start to finish. There is no soul, we are just machines. We're not just any old machine; we are amazing machines. We are machines capable of loving, capable of dreaming, capable of being creative, capable of making plans and sharing them with others. We are people. But we're just machines anyway. And when the machine breaks, that's the end. Death is not some big mystery which we can't get our heads around. Death is in some sense no more mysterious than the fact that your lamp can break, or your computer can break, or any other machine will eventually fail. I never meant to claim that it's not regrettable that we die the way we do. As I argued when talking about immortality, better still would be if only we had the prospect of living as long as life still had something left to offer us. As long as life would be good overall, death is bad, and I think for most of us death comes too soon. But having said that, it doesn't follow that immortality would be a good thing. On the contrary, immortality would be a bad thing. The reaction that makes sense in thinking about the facts of death is not to find it as some great mystery too dreadful to think about, too overwhelming. But rather, fear, far from being the rationally appropriate response I think, is an inappropriate response. Although we can be sad that we die too soon, that perhaps should be balanced by the fact of--the recognition of--just how incredibly lucky we all are to have been alive at all. Yet, at the same time, recognizing that sense of luck and being fortunate doesn't mean that we're always lucky to be remaining alive. 
For some of us the time will come in which that's no longer true, and when that happens life is not something to be held onto, come what may, under any and all circumstances. The time could come for some of us in which it's time to let go. What I then invited you to do, over the course of the semester, is not only to think for yourself about the facts of life and death, but also to come to face death without fear and without illusion. Thanks very much [applause].
YaleCourses_Philosophy_of_Death
15_The_nature_of_death_cont_Believing_you_will_die.txt

Professor Shelly Kagan: Last time we ended with the following puzzle or question. If we say that to be a person is to be a P-functioning body, it seems then as though we have to conclude that when you're not P-functioning, you're dead. That is, you're dead as a person. Previously, we distinguished between the death of my body and my death as a person; let's focus on my death as a person. If I'm not P-functioning, do we have to then say I'm dead? Well, that may seem to be the most natural way to define death, but it's not an acceptable approach. Because it would follow then, that when I'm asleep, I'm dead. Well, not during those times, perhaps, when I'm dreaming while I'm asleep. But think of the various periods during the night in which you are in a deep, deep dreamless sleep. You're not thinking. You're not planning. You're not communicating. Let's just suppose, as seems likely, that none of the P-functioning is occurring, at some point during sleep. Should we say then that you're dead? Well, that's clearly not the right thing to say. So we need to revise our account of what it is on the physicalist picture to say that you're dead. What is it to be dead? It can't just be a matter of not P-functioning. Well, one possibility would be to say, the question is not whether you are P-functioning. It's okay if you're not P-functioning, as long as your not P-functioning is temporary. If you will P-function again, if you have been P-functioning in the past and you will be P-functioning again in the future--P-functioning for person functioning--then you're not dead. Well, that's at least an improvement, because then we say, look, while you're asleep, even though there's no P-functioning going on, the lack of P-functioning is temporary, so you're still alive. But I think that won't quite do either.
Let's suppose that come Judgment Day, God will resurrect the dead. And let's just suppose the correct theory of personal identity is such as to put aside any worries we might have along with van Inwagen, that we discussed previously, as to whether or not on resurrection day that would really be you or not. Suppose it would be you. So God will resurrect the dead. Judgment Day comes. The dead are resurrected. Well, now they're P-functioning. So it turns out that during that period in which they were dead, they were only temporarily not P-functioning. But if death means permanent cessation of P-functioning, then it turns out the dead weren't really dead after all. They were only temporarily not P-functioning, just like we are temporarily not P-functioning when we're asleep. Well, that doesn't seem right either. On Judgment Day, God resurrects the dead. It's not that He simply wakes up those in a deep, deep sleep. So the proposal that death is a matter of permanent cessation of P-functioning versus temporary, that doesn't seem like it's going to do the trick. But what else do we have up our sleeves? Here's a different proposal that I think is probably closer to the right account. We might say, look, while you're asleep, it's true that you're not P-functioning. For example, you're not doing your multiplication tables. But although you are not engaged in P-functioning, it does seem true to say that you still can P-function. You still could do your multiplication tables. Although it's not true that you are speaking French--let's suppose that you know how to speak French--it's still true of you while you're asleep that you can or could speak French. How do we know this? Well, all we have to do is just wake you up. We wake you up and we say, "Hey John, what's three times three?" And after you stop swearing at us, you say, "Well, it's nine." Or we say, "Linda, hey, conjugate such and such a verb in French." And you can conjugate it. 
Even though you were not engaged in P-functioning while you were asleep, it's still true that while you were asleep, you had the ability to engage in P-functioning. Abilities aren't always actualized. Your P-functioning is actualized now, because you're engaged in thought, but you don't lose the ability to think during those moments when you're not thinking. Suppose we say then that to be alive as a person is to be able to engage in P-functioning. And to be dead then, is to be unable to engage in P-functioning. Why are you unable? Well, presumably because whatever cognitive structures it takes in your brain to underwrite the ability to P-function, those cognitive structures have been broken, so they no longer work. It's--When you're dead, your brain is broken. It's not just that you're not engaged in P-functioning, you're no longer able to engage in P-functioning. That, at least, seems to handle the case of sleep properly. Although you're not engaged in P-functioning, you're able to, so you're still alive. Take the dead who will be resurrected on Judgment Day. Although they will be engaged in P-functioning later on, it's not true right now that they can engage in P-functioning. Their bodies and brains are broken until God fixes them. So they're dead. All right, that seems to give the right answer and, in fact, it gives us some guidance how to think about some other puzzling cases. Take somebody who is in a coma, not engaged in P-functioning. Their body, let's stipulate, is still alive. Their heart's still beating, the lungs are still breathing and so forth. But we wonder, is the person still alive? Does the person still exist? Well, they're not engaged in P-functioning. That's pretty clear. We want to know, can they engage in P-functioning? Now, at this point we'd want to know more about the underlying mechanics about what's gone on in the case of the coma. If the following is the right description, then we perhaps should say they're still alive. 
Look, when somebody's asleep, we need to do something to, in effect, wake them up, something to turn the functioning back on. The cognitive structures are still there, but the on-off switch is switched to off. Perhaps that's what it's like when somebody's in a coma, or perhaps at least certain types of comas. Of course, to turn the on-off switch on is harder when somebody's in a coma. It's a bit more--to continue with the metaphor of the on-off switch--as though not only is the switch turned to off, there's a lock on the switch. And so we can't turn the switch on in the normal way. Pushing the person in the coma and saying, "Wake up, Jimmy" doesn't do the trick. But for all that, although the on-off switch may be stuck in off, if the underlying cognitive structures of the brain are such as to still make it true that, flip the on switch back to on and the person can still engage in cognitive P-functioning, maybe the right thing to say is the person's still alive. Coma case two. I'm not sure whether this really should be called a coma. I don't know the biological and medical details. But imagine that what's gone on is there's been decay of the brain structures that underwrite the cognitive functioning. So now it's not just that the on-off switch is stuck in off, the brain's no longer capable of engaging in these higher order P-functions. This might be a persistent vegetative state with no possibility of turning it on, even in principle. Of such a person we might say, they're no longer capable of P-functioning. And then perhaps the right thing to say is the person no longer exists, so they no longer exist as a person, even if the body is still alive. So far, so good. Here's a harder case to think about. Suppose we put somebody in a state of suspended animation, cool their body down so that the various metabolic processes come to an end. They stop.
As I'm sure you know, we're able, with various lower organisms, to put them in a state of suspended animation and then, the amazing thing is, if you heat them back up again properly, they start functioning again. Now, we can't do that yet with humans. But it doesn't jump out at us, at least, that that should be an impossibility. So suppose we eventually learn how to do this with humans. And now, suppose we take Larry and put him in a state of suspended animation. Is he dead? Well, most of us don't feel comfortable saying that he's dead. Just like we don't feel comfortable saying that the--I suppose we could do this with a fruit fly. I don't know whether we can or can't. Suppose we can. Suppose we do it with a fruit fly. We don't feel comfortable saying the fruit fly's dead. Rather, it's in a state of suspended animation. Well, similarly then, perhaps we wouldn't want to say that Larry is dead. And the "brokenness" account of death allows us to say Larry's not dead. The structures in the brain which would underwrite the ability to engage in P-functioning, they're not destroyed by suspended animation. So perhaps in the relevant sense, the person can still engage in P-functioning, so they're not dead. Good enough. On the other hand, it doesn't seem so plausible, it doesn't seem intuitively right, to say that they're alive. Is Larry alive when he's in a state of suspended animation? No. It seems like he's not alive either. Now that's a bit puzzling, right? It's as though we need--Normally, we think that look, either you're alive or you're dead. The two possibilities exhaust the possibilities. But thinking about suspended animation suggests that we may actually need a third category, suspended--neither alive nor dead. Well, all right, if we do introduce a third possibility--I'm not sure this is the right thing. It's not clear what's the right or best thing to say about suspended animation. But at least that doesn't seem like an unattractive possibility. 
If there are three possibilities--dead, alive, or suspended--to be dead, we could still say you've got to be broken, incapable of P-functioning. Suspended isn't broken. It's just suspended. But then what do you need to be alive? In addition to not being broken, what do you need to be alive? Well, the initially tempting thing to say is not only aren't you broken, but you're actually engaged in P-functioning. But if we say that, then we're back to saying that somebody who's asleep isn't really alive. That doesn't seem right either. So we need some account to distinguish between suspended animation and out and out being alive. And I'm not quite sure how to draw that line. So I'll leave that to you as a puzzle to work on on your own. That puzzle aside, it seems to me that once we become physicalists, there's nothing especially deep or mysterious about death. The body is able to function in a variety of ways. When some of those lower biological functions are occurring, the body's alive. When all goes well, the body is also capable of engaging in higher order personal P-functioning. And then you've got a person. The body begins to break, you get the loss of P-functioning. At that point, you no longer exist as a person. When the body breaks some more, you get the loss of biological or B-functioning, and then the body dies. There's nothing especially mysterious about death, although there may be a lot of details to work out from a scientific point of view. What are the particular processes that underwrite biological functioning? What are the particular processes that underwrite personality or person functioning? Still, there are a couple of claims about death that get made frequently enough, about death being mysterious in one way or another, that I want--or special or unique--that I want to focus on. 
In effect, from the physicalist point of view, although death is unique because it comes at the end of this lifetime of various sorts of functions, there's nothing especially puzzling, nothing especially mysterious, nothing especially unusual or hard to grasp about it. But there are a handful of claims that people make about death suggesting that they think, and they think we all think, that death is mysterious or unique or hard to comprehend. I want to examine a couple of these. One of them I'll get to later; if not later today, then next lecture. Sometimes people say that we die alone or everybody dies alone. And this is something--This is supposed to express some deep insight into the nature and uniqueness of death. So although we're able to eat meals together, we're able to go on vacations together and take classes together, death is something we all have to do by ourselves. That's the claim. We all die alone. That's a claim I'll come back to. What I want to look at first is the suggestion that somehow, at some level, nobody really believes they're going to die at all. Now, having distinguished between what we've called the death of the body and the death of the person, the question whether or not you're going to die needs to be distinguished. The question whether or not you believe you're going to die needs to be distinguished. If somebody says, "You know, nobody really believes they're going to die," they could mean one of two things. They could mean nobody really believes they're going to cease to exist as a person, first possibility. Second possible claim, nobody really believes they're going to undergo the death of their bodies. Let's take these in turn. Is there any good reason to believe that we don't believe that we're going to cease to exist as a person? Well, the most common argument for this claim I think takes the following form. 
People sometimes say, it's impossible to picture being dead--that is to say, it's impossible to picture your own being dead. Each one of us has to think about this from the first person perspective, or something like that: think about your dying, your being dead. Since that's impossible to picture, impossible to imagine, nobody believes in the possibility that they're going to die, that they're going to cease to exist. The idea seems to be that you can't believe in possibilities that you can't picture or imagine. Now, that hypothesis, that thesis, that assumption, could be challenged. I think probably we shouldn't believe the theory of belief which says that in order to believe in something, you've got to be able to picture it or imagine it. But let's grant that assumption for the sake of argument. Let's suppose that in order to believe in something, you've got to be able to picture it. What then? How do we get from there to the conclusion that I can't believe that I'm going to die, that I'm going to cease to exist as a person? Well, the thought, of course, is I can't picture or imagine my death. I can't picture or imagine my being dead. It's important here to draw some distinctions. I can certainly picture being ill. There I am on my deathbed dying of cancer, growing weaker and weaker. I can perhaps even picture the moment of my death. I've said goodbye to my family and friends. Everything's growing greyer and dimmer. It's growing harder and harder to concentrate. And then, well, and then there is no "and more." The claim, however, is not that I can't picture being ill or dying. The claim's got to be, I can't picture being dead. Well, try it. Try to picture being dead. What's it like to be dead? Sometimes people claim it's a mystery. We don't know what it's like to be dead, because every time we try to imagine it, we fail. We don't do a very good job.
I'm inclined to think that that way of thinking about the question is really confused. You set yourself the goal of trying to put yourself in the situation imaginatively of what it's like to be dead. So I start by trying to strip off the parts of my conscious life that I know I won't have when I'm dead. I won't hear anything. I won't see anything. I won't think anything. And you try to imagine what it's like to not think or feel or hear or see. And you don't do a very good job of it. So you throw your hands up and you say, "Oh, I guess I don't know what it's like." So it must be a mystery. It's not a mystery at all. Suppose I ask, "What's it like to be this cell phone?" The answer is, "It's not like anything," where that doesn't mean there's something that it's like to be a cell phone, but different from being anything else. So it's not like anything else; it's a special way of feeling or experiencing. No. Cell phones don't have any experience at all. There is nothing that it's like on the inside to be a cell phone. Imagine that I try to ask myself, "What's it like to be my ball point pen?" And I try to imagine, well, first, imagine being really, really stiff, because you're not flexible when you're a ball point pen. You can't move. And imagine being really, really bored, because you don't have any thoughts or interests. No. That's completely the wrong way to go about thinking what it's like to be a ball point pen. There's nothing that it's like to be a ball point pen. There's nothing to describe, nothing to imagine. No mystery about what it's like to be a ball point pen. No mystery about what it's like to be a cell phone. Well, similarly then, I put it to you, there's no mystery about what it's like to be dead. It isn't like anything. What I don't mean, "Oh, it's like something, but different from everything else." I mean, there is nothing there to describe. When you're dead, there's nothing happening on the inside to be imagined. 
Well, should we conclude therefore, given that we've got the premise, "If you can't picture it or imagine it, then you can't believe in it," since I've just said, look, you can't imagine being dead, but that's not due to any failure of imagination, that's because there's nothing there to imagine or picture. Still, granted the premise, if you can't picture it or imagine it, you can't believe in it--Should we conclude, therefore, that you can't believe you're going to be dead? No. We shouldn't conclude that. After all, not only is it true that you can't picture from the inside what it's like to be dead, you can't picture from the inside what it's like to be in dreamless sleep. There is nothing that it's like to be in dreamless sleep. When you're in dreamless sleep, you're not imagining or experiencing anything. Similarly, it's not possible to picture or imagine what it's like to have fainted and be completely unconscious with nothing happening cognitively. There's nothing to picture or imagine. Well, should we conclude, therefore, so nobody really believes that they're ever in dreamless sleep? Well, that would be silly. Of course you believe that at times you're in dreamless sleep. Should we say of somebody who's fainted or knows that they're subject to fainting spells, they never actually believe that they pass out? That would be silly. Of course, they believe they pass out. From the mere fact that they can't picture it from the inside, it doesn't follow that nobody believes they're ever in dreamless sleep. From the mere fact that they can't picture from the inside what it's like to have fainted and not yet woken up, it doesn't mean that nobody believes that they ever faint. From the mere fact that you can't picture from the inside what it's like to be dead, it doesn't follow that nobody believes they're going to die. 
But didn't I start off by saying I was going to grant the person who is making this argument that in order to believe something, you've got to be able to picture it? And haven't I just said, "Look, you can't picture being dead"? So aren't I taking it back, since I say you can believe you're going to die, yet you can't picture it from the inside? Haven't I taken back the assumption that in order to believe it, you've got to be able to picture it? Not quite. Although I am skeptical about that claim, I am going to continue granting it to the person who makes this argument, because I'm not prepared to grant that you can't picture being dead. You can picture being dead, all right. You just can't picture it from the inside. You can picture it from the outside. I can picture being in dreamless sleep quite easily. I'm doing it right now. I've got a little mental image of my body lying in bed asleep, dreamlessly. I can picture fainting, or having fainted, quite easily. Picture my body lying on the ground unconscious. I can picture my being dead quite easily. It's a little mental picture of my body in a coffin. No functioning occurring in my body. So even if it were true that belief requires picturing, and even if it were true that you can't picture being dead from the inside, it wouldn't follow that you can't believe you're going to die. All you have to do is picture it from the outside. We're done. So I conclude, of course you can and do believe you're going to die. But at this point, the person making the argument has a possible response. And it's a quite common response. He says, "Look, I try to picture the world--admittedly from the outside--I try to picture the world in which I don't exist, I'm no longer conscious. I'm no longer a person, no longer experiencing anything. I try to picture that world. I picture, for example, seeing my funeral. And yet, when I try to do that, I'm observing it. I'm watching the funeral. I'm seeing the funeral. Consequently, I'm thinking.
So I haven't really imagined the world in which I no longer exist, a world in which I'm dead, a world in which I'm incapable of thought and observation. I've smuggled myself back in as the observer of the funeral." Every time I try to picture myself being dead, I smuggle myself back in, conscious and existing as a person, hence, not dead as a person. Maybe my body--I'm imagining my body dead, but I'm not imagining myself, the person, dead. From which it follows, the argument goes, that I don't really believe I'll ever be dead. Because when I try to imagine a world in which I'm dead, I smuggle myself back in. This argument shows up in various places. Let me quote one instance of it, from Freud. This is from one of the Walter Kaufmann essays that you'll be reading, called "Death," in which Kaufmann quotes Freud. Freud says, After all, one's own death is beyond imagining, and whenever we try to imagine it we can see that we really survive as spectators. Thus, the dictum could be dared in the psychoanalytic school: at bottom, nobody believes in his own death. Or, and this is the same: in his unconscious, every one of us is convinced of his immortality. All right, there's Freud. Basically, just running the argument I've just sketched for you. When you try to imagine your being dead, you smuggle yourself back in as a spectator. And so, Freud concludes, at some level none of us really believes we're going to die. I want to say, I think that argument's a horrible argument. How many of you believe that there are meetings that take place without you? Suppose you're a member of some club and there's a meeting this afternoon and you won't be there, because you've got to be someplace else. So you ask yourself, "Do I believe that meeting's going to take place without me?" At first glance, it looks like you do, but here's the Freudian argument that shows you don't really. Try to imagine, try to picture that meeting without you.
Well, when you do picture it, there's that room in your mind's eye. You've got a little picture of people sitting around the table perhaps, discussing the business of your club. Uh-oh, I've smuggled myself in as a spectator. If you're like me--I think most of us picture these things from a perspective in a corner of the room, up on the wall, looking down, a kind of fly's perspective--I've smuggled myself in as a spectator. I'm actually in the room after all. So I haven't really pictured the meeting taking place without me. So I guess I don't really believe the meeting's going to take place without me. If Freud's argument about death--that is to say, the argument that none of us believes we're going to die--were any good, the argument that none of us believes meetings ever take place without us would have to work as well. But that's silly. It's clear that we all do believe in the possibility, indeed, more than a mere possibility, the actuality of meetings that occur without us. Even though when I imagine that meeting, I'm in some sense smuggling myself in as an observer. From which I think it follows that the mere fact that I've smuggled myself in as an observer doesn't mean that I don't really believe in the possibility that I'm observing in my mind's eye. I can believe in the existence of a meeting that takes place without me, even though I smuggle myself in as an observer when I picture that meeting. I can believe in the possibility of a world without me, even though I smuggle myself in as an observer when I picture that world without me. Although I'm picking on Freud, it's not only Freud who runs this sort of argument. One comes across it periodically. Within the last year, a member of our law school here put forward this very argument and said he thought it was a good one. So people think the argument's a good one. It strikes me that it's got to be a bad one.
The confusion, the mistake I think people are making when they make this argument, is this. It's one thing to ask yourself, what's the content of the picture? It's another thing to ask whether, when you look at the picture, you're looking at it from a certain point of view. Suppose I hold up a photograph of a beach with nobody on it. All right, am I on that beach, as pictured in that photograph? Of course not. But as I look at it, whether in reality or in my mind's eye, I'm looking at it from a perspective. As I think about it, I'm viewing the beach from a point of view, which may well be a point on the beach--as when somebody paints a picture of a beach from a spot on the beach itself. But for all that, that doesn't mean that within the picture of the beach, I'm on the beach. Looking at a picture doesn't mean you're in the picture. Viewing the meeting from a point of view doesn't mean you're in the meeting. Viewing the world without you from a point of view doesn't mean you're in the world. So although, of course, it's true that when I imagine these various possibilities without me, I'm thinking about them. I'm observing them. And I'm observing them from a particular perspective, from a particular standpoint. For all that, I'm not in the picture that I'm thinking about. So I think the Freudian argument just fails. Now, maybe there's some other reason to believe the claim that nobody believes they will cease to exist. But if there is another argument for that claim, I'm eager to hear it, because this argument, at any rate, seems to me to be unsuccessful. Now, at the start, I distinguished two claims people might have in mind when they say, "Nobody believes they're going to die." The first possibility was that the claim was, nobody believes that they'll ever cease to exist as a person. And I've just explained why at least the most familiar argument for that claim, I think, doesn't work. The second possible interpretation was this.
Nobody believes their body is going to die. That is, the more familiar humdrum event of death where your body ceases functioning and you end up having a corpse that gets buried and so forth. Sometimes it's suggested that nobody believes that either. Of course, often, I think, people run together these two questions. When they say you don't believe you're going to die, do you mean, you don't believe your body's going to die? or you don't believe you're going to cease to exist as a person? Maybe when people make the claim, it's not clear which of these things they've got in mind. But let's, at least, try to now focus on the second question. Could it be true, is there any good reason to believe it is true, that nobody believes they're going to undergo bodily death? Now, after all, even if you believe that, well, your soul will go to heaven so you won't cease to exist as a person, you might still believe that your body will die. Most of us presumably do believe our bodies will die. At least, that's how it seems to me. So it's a bit odd to suggest, as it nonetheless does get suggested, that no, no, at some level, people don't really believe they're going to die. Let me point out just how odd a claim that is. Because people do all sorts of behaviors which become very, very hard to interpret if they don't really believe their bodies are going to die. People, for example, take out life insurance so that--well, here's what seems to be the explanation. They believe that there's a decent chance that they will die within a certain period of time. And so, if that happens, they want their children and family members to be cared for. If you didn't really believe you were going to die, that is undergo bodily death, why would you take out life insurance? People write wills. "Here's what you should do with my estate after I die." If you didn't really believe that your body was going to die, why would you ever bother writing a will? 
Since many people write wills, and many people take out life insurance, it seems as though the natural thing to suggest is that many people, perhaps most, believe they're going to die. Why would we think otherwise? Well, the reason for thinking otherwise, the reason for not being utterly dismissive of this suggestion, is that when people get ill, terminally ill, it often seems to take them by surprise. So I've been having you read Tolstoy's novella, The Death of Ivan Ilyich. Ivan Ilyich falls, he hurts himself. The injury doesn't get better. He gets worse and worse and eventually it kills him. The astonishing thing is that Ivan Ilyich is shocked to discover that he's mortal. And of course, what Tolstoy is trying to convince us of--the claim, I take it, that he's trying to argue for by way of this illustration--is that most of us are actually in Ivan Ilyich's boat. We give lip service to the claim that we're going to die, but at some level, we don't really believe it. And notice again, just to emphasize the point, the relevant lack of belief here has to do with the death of the body. That's the thing that Ivan Ilyich is skeptical about. Is his body going to die? Is he mortal in that sense? This is what takes him aback, to discover that he's mortal. For all we know, Ivan Ilyich still believes in souls, believes he's going to go to heaven and so forth. So it's not his death as a person that he's puzzled by. He may not think he's going to die as a person. It's his bodily death that surprises him, his bodily mortality that surprises him. Tolstoy draws a highly realistic and believable portrait of somebody who is surprised to discover that he's mortal. As he puts it, there's a famous syllogism that people learn in their logic classes from Aristotle: all men are mortal; Socrates is a man; so Socrates is mortal. Ivan Ilyich says, "Yes, yes, I knew that. But what did that have to do with me?" Well, it may be a kind of irrationality.
It may be a kind of failure to follow the logic. But we're not asking whether it's rational or irrational not to believe that your body's going to die; we're simply noting the fact that there seem to be cases where people are surprised to discover that they're mortal. Now, for all that, notice, I presume that Ivan Ilyich had a will. And for all I know, Ivan Ilyich had life insurance. So we're in the peculiar situation where on the one hand, some of Ivan Ilyich's behaviors indicate that he believed he was mortal, that his body was going to die. And yet, the shock and surprise that overcomes him when he actually has to face his mortality strongly suggests that he's reporting correctly: he didn't believe he was going to die. How could that be? There's a kind of puzzle there: even before we move to the question of how widespread cases like this are, there's a puzzle as to how we are even to understand this case. We need to distinguish perhaps between what he consciously believes and what he unconsciously believes. Maybe at the conscious level he believed he was mortal, but at the unconscious level he believed he was immortal. Or maybe we need to distinguish between those things he gives a kind of lip service to, versus those things he truly and fundamentally believes. Maybe he gives lip service to the claim that he was mortal. If you had asked him, "Are you mortal?" he would have said, "Oh, of course I am." And he buys life insurance accordingly. But does he thoroughly and truly and fundamentally believe he's mortal? Perhaps not. We need some such distinction if we're going to make sense of Ivan Ilyich. Well, let's suppose we've done it. We still have to ask, not, are there ever cases of people who don't believe they're going to die?
but rather, is there any good reason to think that all or most of us are in that situation, in that state of belief where, although we give lip service to the claim that we're going to die, fundamentally we don't actually believe it? That's the question we have to turn to next time.
YaleCourses_Philosophy_of_Death | 3_Arguments_for_the_existence_of_the_soul_Part_I.txt

Professor Shelly Kagan: Today we're going to take up the discussion where we left it last time. We were talking about two main positions with regard to the question, "What is a person?" On the one hand, we have the dualist view; that's the view that we spent a fair bit of time sketching last meeting. The dualist view, according to which a person is a body and a soul. Or perhaps, strictly speaking, what we should say is the only part that's essential to the person is the soul, though it's got a rather intimate connection to a particular body. That's the dualist view. In contrast to that, we've got the physicalist view, according to which there are just bodies. A person is just a body, as we might put it. Now, the crucial point here, the point I was turning to as we ended last time, is that although a person on the physicalist view is just a body, a person isn't just any old body. A person is a body that has a certain set of abilities, can do a certain array of activities. People are bodies that can think, that can communicate, that are rational, that can plan, that can feel things, that can be creative, and so forth and so on. Now, we might argue about what's the exact best list of those abilities. For our purposes, I think that won't be crucial, and so I'll sometimes talk about this set of abilities without actually having a canonical list. Just think of them as the set of abilities that people have, the things that we can do that other physical objects--chalk, radios, cars--those things can't do. Call those the abilities that make something a person. To just introduce a piece of jargon, we could call those the P abilities, P for person. Or we could talk about the various kinds of ways--this is the physicalist way of thinking about it--according to the physicalist, a person is just a body that has the ability to fulfill the various P functions.
And we can talk, then, about a person as a P-functioning body. Or we could say that a person is a body that is P-functioning. It's important to see that the idea is, although it's a body, it's not just any old body. Indeed, it's not just any old human body. After all, if you pull out your gun, shoot me in the heart, and I bleed to death, we still have a human body in front of us. But we don't have a P-functioning body. We don't have a body that's able to think, a body that's able to plan, to communicate, to be creative, to have goals. So the crucial thing about having a person is having a P-functioning body. Now, what's a mind on this view? On the physicalist view, it's still perfectly legitimate to talk about minds. The point, though, is that from the physicalist perspective, the best thing to say is, talk about a mind is a way of talking about these various mental abilities of the body. We nominalize it. We talk about it using a noun, the mind. But talk of the mind is just a way of talking about these abilities that the body has when it's functioning properly. This is similar, let's say, to talking about a smile. We believe that there are smiles. Physicalists don't deny that there are minds. Just like we don't deny, we all believe, that there are smiles. But what is a smile? Well, a smile is just a way of talking about the ability of the body to do something. This characteristic thing we do with our lips, exposing our teeth and so forth. It's a smile--a rather dorky smile, but there's a smile. Now, if you were listing the parts of the body, you would list the teeth, you would list the lips, you would list the gums, you would list the tongue, but you wouldn't list the smile. So, should we conclude, as dualists, that smiles are these extra nonphysical things that have a special intimate relationship with bodies? Well, you could imagine a view like that, but it would be rather a silly view. Talk about a smile is just a way of talking about the body's ability to smile.
There's no extra part. Even though we have a noun, the smile, that if you're not careful might lull you into thinking there must be a thing, the smile. And then you'd have all these metaphysical conundrums. Where is the smile located? It seems to be in the vicinity of the mouth. But the smile isn't the lips. The smile isn't the teeth. So it must be something nonphysical. No, that would just be a silly way to think about smiles. Talk of smiles is just a way of talking about the ability of the body to smile, to form a smile. That's an ability that we have, our bodies have. Similarly, then, according to the physicalist, talk of the mind, despite the fact that we have a noun there, is just a way of talking about the abilities of the body to do various things. The mind is just a way of talking about the fact that our body can think, can communicate, can plan, can deliberate, can be creative, can write poetry, can fall in love. Talk of all of those things is what we mean by the mind, but there's no extra thing, the mind, above and beyond the body. That's the physicalist view. So it's important, in particular, to understand that from the physicalist's point of view, the mind is not the brain. You might think, "Look, according to physicalists minds are just brains." And that wouldn't be a horrendously misleading thing to say, because according to the best science that we've got, the brain is the part of the body that is the seat or house or the underlying mechanical structure that gives us these various abilities. These P functions are functions that we have by virtue of our brain. So that might tempt you into saying the mind on the physicalist view is just a brain. But we probably shouldn't say that. After all, if you shoot me, there's my corpse lying on the stage. Well, there's my brain. My brain is still there in my head. But we no longer have a person. The person has died. The person, it seems, no longer exists. 
Whether strictly that's the best thing to say or not is a question we'll have to come to in a couple of weeks. But it seems pretty clear that the mind has been destroyed, even though the brain is still there. So I think, at least when there's the need to be careful--maybe we don't normally have a need to be careful--but when there's the need to be careful, we should say, talk of the mind is a way of talking about the P-functioning of the body. Our best science suggests that a well-functioning body can perform these things, can think and plan and fall in love, by virtue of the fact that the brain is functioning properly. That's the physicalist view. On the dualist view, what was death? Death is presumably the separation of the mind and the body, perhaps the permanent separation, with the destruction of the body. What's death on the physicalist view? Well, there is no extra entity, the soul. The mind is just the proper P-functioning of the body. So, the mind gets destroyed when the ability of the body to function in that way has been destroyed. Death is, roughly, the end of this set of functioning. Again, this probably should be cleaned up, and in a couple of weeks we'll spend a day, or half a period, trying to clean it up and make it somewhat more precise. But there's nothing mysterious about death from the physicalist point of view, at least about the basic idea of what's going on in death. I've got a stereo. Suppose I hold up my boombox for you and it's playing music. It's one of the things it can do. And I drop it on the ground, smashing it. Well, it no longer can function properly. It's broken. There's no mystery why it can't function once it's broken. Death is basically just the breaking down of the body, on the physicalist point of view, so that it no longer functions properly. One other point worth emphasizing in sketching the physicalist view is this. So, as I said, physicalists don't deny that there are minds.
Even though we say "we're just bodies," that doesn't mean that we're just any old body. It's not as though the physicalist view is, "we're bodies that have some illusion of thinking." No, we're bodies that really do think. So there really are minds. We could, on the physicalist point of view, call those souls. Just like there's no danger in talking of the mind from the physicalist perspective, there wouldn't be any serious danger in talking about a soul. And so, in certain contexts, I'm perfectly comfortable--in my physicalist moods, I am perfectly comfortable--talking about this person's soul. He's got a good soul, a bad soul, how the soul soars when I read Shakespeare, or what have you. There's nothing upsetting or improper about the language of the soul, even on the physicalist point of view. But in this class, just to try to keep us from getting confused, as I indicated before and I want to remind you, I'm going to save the word "soul"; I'm going to at least try to save the word "soul" for when I'm talking about the dualist view. So we might put it this way. The neutral term is going to be "mind." We all agree that people have minds, sort of the house or the seat of our personalities. The question is, "What is a mind?" The dualist position is that the mind is a soul and the soul is an immaterial object. So when I use the word "soul," I will try to reserve it for the metaphysical view, according to which souls are something immaterial. In contrast to that, we've got the physicalist view. Physicalists also believe in minds. But minds are just a way of talking about the abilities of the body. So physicalists do not believe in any immaterial object above and beyond the body that's part of a person. Just to keep things clear, I will say that physicalists, materialists, do not believe in souls. Because, for the purposes of this class, I'm going to reserve the word "soul" for the immaterialist conception of the mind. In other contexts--no harm in talking about souls. 
So these are the two basic positions: the dualist view on the one hand, the physicalist view on the other. The question we need to turn to--I take it that just as the dualist view is a familiar one, so it's true that the physicalist view is a familiar one. Whether or not you believe it, you are familiar with the fact that some people believe it, or at least you wonder whether it's true. Does science require that we believe in the physicalist view or not? The question we want to turn to, then, is, "Which of these two views should we believe: the dualist position or the physicalist position?" And the crucial question, presumably, is, "Should we believe in the existence of a soul?" Both sides believe in bodies. As I say, the dualist position, as we're understanding it, is not a view that says there are only minds, there are no bodies. Dualists believe that there are bodies. They believe that there are souls as well as bodies. Physicalists believe there are bodies but no souls. So there's an agreement that there are bodies. Here is one. Each one of you is sort of dragging one around with you. There's agreement that there's bodies. The question is, "Is there anything beyond bodies?" Is there anything beyond the body? Is there a soul? Are there souls? That's the question that's going to concern us for a couple of weeks. If we ask ourselves, "What reasons do we have to believe in a soul?" we might start by asking, what reasons do we have to believe in anything? How do we prove the existence of things? For lots of familiar everyday objects, the answer is fairly straightforward. We prove their existence by using our five senses. We just see them. How do I know that there are chairs? Well, there are some chairs in front of me. Open my eyes, I see them. How do I know that there is a lectern? Well, I see it. I can touch it. I feel it. How do I know that there are trees? I see them. How do I know that there are birds? I see them. I hear them.
How do I know that there are apples? I see them. I taste them. So forth and so on. That approach pretty clearly isn't going to work for souls, because a soul--and again, we've got in mind this metaphysical view, according to which it's something immaterial--isn't something we see. It's not something we taste or touch or smell or hear. We don't directly observe souls with our five senses. You might wonder, well, don't I sort of directly observe it in myself that I have a soul? Although I guess there have been people who've made that sort of claim, it seems false to me. I can only ask each of you to sort of introspect for a second. Turn your mind's eye inward and ask: do you see a soul inside you? I don't think so. I see things outside me. I feel certain sensations in my body, but it doesn't seem as though I observe a soul. Even if I believe in a soul, I don't see it. How do we prove the existence of things we can't see or hear or taste and so forth? The usual method, maybe not the only method, but the usual method is something like this. Sometimes, we posit the existence of something that we can't see so as to explain something else that we all agree takes place. Why do I believe in the existence of atoms? I don't see individual atoms. Why do I believe in the existence of atoms so small that I can't see them? Because atomic theory explains things. When I posit the existence of atoms with certain structures and certain ways of interacting and combining and building up, when I posit atoms, suddenly I can explain all sorts of things about the physical world. So, I infer the existence of atoms based on the fact that doing that allows me to explain things that need explaining. This is a kind of argument that we use all the time. Why do I believe in x-rays, even though I don't see them? Because doing that allows me to explain certain things. Why do I believe in certain planets too far away to be observed directly through a telescope?
Because positing them allows you to explain things about the rotation of the star, or the gravitational fluctuations, or what have you. We make inferences to the existence of things we can't see, when doing that helps us to explain something we can't otherwise explain. This pattern of argument, which is ubiquitous, is called "inference to the best explanation." I want to emphasize this bit about "best explanation." What we're justified in believing are those things that we need, not simply when they would offer us some kind of explanation, but when they offer us the best explanation that we can think of. So look, why am I justified in believing in germs--various kinds of viruses or bacteria or what have you--that I can't see? Because doing that allows me to explain why people get sick. But there are other things that would allow me to explain that as well. How about demons? I could believe in demons and say, "Why does a person get sick and die? Well, it is demonic possession." Why aren't I justified in believing in the existence of demons? It's a possible explanation. But what we seem to be justified in believing is not just any old explanation, but the "best explanation." So we've got two rival explanations. We've got, roughly, germ theory and we've got demon theory. We have to ask ourselves, "Which of these does a better job of explaining the facts about disease?" Who gets what kinds of diseases? How diseases spread, how they can be treated or cured, when they kill somebody. The fact of the matter is, demon theory doesn't do a very good job of explaining disease, while germ theory does do a good job. It's the better explanation. So we're justified in believing in germs, but not demons. It's a matter of inference, not just to any old explanation, but inference to the best explanation. All right, so, what we need to ask ourselves, then, is, "What about the soul?" We can't observe souls. But here's a possible way of arguing for them.
Are there things that need to be explained that we could explain if we posited the existence of a soul, an immaterial object, above and beyond the body? Are there things that the existence of a soul could explain, and explain better than the explanation that we would have if we had to limit ourselves to bodies? You might put it this way, as sort of the easiest version of this kind of argument, for our purposes. Are there things about us that the physicalist cannot explain? Are there mysteries or puzzles about people where the physicalist just draws a blank, but where, if we become dualists, we can explain these features? Suppose there was a feature like that, feature F. Then we'd say, "Look, although we can't see the soul, we have reason to believe in the soul, because positing the existence of a soul helps us to explain the existence of feature F, which we all agree we've got." Suppose it was true that you couldn't explain love from the physicalist perspective. But we all know that people do fall in love, and souls would allow us to explain that. Boom, we'd have an argument for the existence of a soul. It would be an example of "inference to the best explanation." Now, the crucial question, of course, is, "What's the relevant feature F?" Is there some feature that the physicalist can't explain, so that we need to appeal to something extra-physical to explain it? Or that the physicalist can only do a rotten job of explaining, like demon theory did, whereas if we were to appeal to something nonphysical, we would do a better job of explaining it? If we could find the right F and make out the argument--the physicalist can't explain it, or does a bad job of explaining it, and the dualist does a better job--we'd have reason to believe in the soul. Like all arguments in philosophy, it would be a tentative argument. We'd sort of have some reason to believe in the soul until we see what next argument comes down the road.
But at least it would give us some reason to believe in the soul. What I want to do is ask, "What might feature F be?" Is there any such feature F? It's probably also worth underlining the fact that what I'm really going to be doing is running through a series of arguments. "Inference to the best explanation" is not a single argument for the soul. It's rather the name for a kind of argument. Depending on what F you fill in the blank with, what pet feature or fact you're trying to explain by appeal to the soul, you get a different argument. So let's ask ourselves, "Are there things about us that we need to appeal to the soul in order to explain?" Here's a first try. Actually, let me start by saying I'm going to distinguish two broad families of characteristics we might appeal to. We might say, one set of approaches focuses on ordinary, familiar, everyday facts about us. The fact that we love, the fact that we think, the fact that we experience emotions, what have you--these are ordinary features of us. I'm going to start with those and then I'll turn, eventually, to another set of possible things that might need explaining, which we might think of as extraordinary, supernatural things. Maybe there are certain supernatural things about communication from the dead or near-death experiences that need to be explained in terms of the soul. We'll get to those, but we'll start with ordinary, everyday, humdrum facts about us. Even though they're ordinary and familiar, it still could turn out that we need to appeal to souls in order to explain them. So, to start, how about this? Start with a familiar fact, which I've already drawn your attention to a couple of times: that you can have a body that's dead. You could have a corpse, and that's clearly not a person. It's not a living being. It's not a person. It doesn't do anything. It just lies there; whereas your body, my body is animated.
I move my hands around, my mouth is going up and down, it walks from one part of the stage to the other part of the stage. Maybe we need to appeal to the soul in order to explain what animates the body. The thought would be, when the soul and the body have been separated--so the dualist explains--the soul has lost its ability to give commands to the body. So the body is no longer animated. So we've got a possible explanation of the difference between an animated and an inanimate body: is the soul in contact of the right sort with the body? There's a possible explanation. You might say, "Look, the physicalist can't tell us that, because all the physical parts are still there when you've got the corpse, at least if it's a fresh corpse before the decay has set in. So, we need to appeal to the existence of a soul in order to explain the animation of bodies like the ones that you and I have." Well, I said I was going to run through a series of arguments but that doesn't mean that--the lights have just turned off; I don't know why--that doesn't mean that I think the arguments will all work. I announced on the first day of class that I don't, myself, believe in the existence of a soul. As such, it shouldn't be any surprise to you that what I'm going to do as we run through each of these arguments is to say, "I'm not convinced by it and here's why." Now, since I think that the arguments I'm about to sketch--and I've just started sketching the first of them--fail, I hope you'll think them over and eventually come to agree with me: yeah, these arguments don't really work after all. But what's more important to me is that you at least think about each of these arguments. Is this a convincing argument for the existence of a soul? If you think so, what response do you want to offer to the objections that I'm giving? If this argument doesn't work, is there another argument for the existence of a soul that you think is a better one?
First argument: you need the soul in order to explain the animation of the body. From the physicalist point of view, of course, the answer is going to be "too quick." To have an animated body, you need to have a functioning body. It's true that when you've got a corpse, you've got all the parts there, but clearly they're not functioning properly. But all that shows us is, the parts have broken. Remember my stereo? I dropped my stereo. It falls on the stage. It doesn't work anymore. It stops giving off music. My boombox stops giving off music. That's not because previously there was something nonmaterial there--we had a CD inside of it, we had some batteries, and we dropped the whole thing. We've got all the same parts there, but the parts are now broken. They're not connected to each other in the right way. The energy is not flowing from the batteries through the wires to the CD component. There's nothing mysterious from the physicalist perspective about the idea that a physical object can break. Although we need to offer a story about what makes the parts work when they're connected with each other and interacting in the right way, there's no need to appeal to anything beyond the physical. Suppose we try to refine the argument. Suppose we say, "You need to appeal to the soul in order to explain not just that the body moves around, flails, but that the body acts purposefully." We need something to be pulling the strings, to be directing the body. That's what the soul does, so says the dualist. In response, the physicalist is going to say, "Yes, it's true that bodies don't just move around in random patterns." Human bodies don't do that. So we need something to direct it, but why couldn't it just be that one particular part of the body plays the part of the command module? Suppose I've got a heat-seeking missile which tracks down the plane. As the plane tries to dodge it, the missile corrects its course.
It's not just moving randomly, it's moving purposefully. There had better be something that explains, that's controlling, the motions of the missile. But for all that, it could just be a particular piece of the missile that does it. More gloriously, we could imagine building some kind of a robot that does a variety of tasks. It's not moving randomly, but the tasks are all controlled by the CPU within the robot. The physicalist says we don't need to appeal to anything as extravagant as a soul in order to explain the fact that bodies don't just move randomly, but they move in purposeful ways that are controlled. For each objection, there's a response. You could imagine the dualist coming back and saying, "Look, in the case of the heat-seeking missile or the robot for that matter, although it's doing things, it's just obeying orders. And the orders were given to it from something outside itself." Something programmed the robot or the missile. So don't we need there to be something outside the body that programs the body? That could be the soul. That's a harder question. Must there be something outside the body that controls the body? One possibility, of course, is, why not say that people are just robots as well and we get our commands from outside? On a familiar religious view, God built Adam out of dirt, out of dust. Adam is just a certain kind of robot then. God breathes into Adam. That's sort of turning it on. Maybe people are just robots commanded from outside by God. But that doesn't mean that there's anything more to us than there is to the robot. That's one possible response. A different response, of course, is why couldn't we have robots that just build more robots? Then, if you ask, "Where did the commands come from?" the answer is, "When they were built, they were built in such a way as to have certain instructions that they begin to follow out." 
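The kind of "purposeful" pursuit the missile example describes can be captured by a purely mechanical feedback rule. The sketch below is my own illustration, not anything from the lecture: at every step the pursuer measures its error relative to the target and corrects its course. No goals are represented anywhere; the goal-directed look comes entirely from the update rule.

```python
# A purely mechanical "purposeful" pursuer: each step, it corrects
# its velocity toward the target's current position. The missile-like
# behavior emerges from the feedback loop alone, with no nonphysical
# controller anywhere in the system.

def pursue(pursuer, target_path, speed=2.0):
    """Chase a moving target; return (final position, intercepted?)."""
    x, y = pursuer
    for tx, ty in target_path:
        dx, dy = tx - x, ty - y              # error signal
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= speed:                    # close enough: intercept
            return (tx, ty), True
        x += speed * dx / dist               # correct course
        y += speed * dy / dist
    return (x, y), False

# A target that keeps dodging along a diagonal; the pursuer still closes in.
target = [(10 + 0.5 * t, 5 + 0.5 * t) for t in range(60)]
final_pos, hit = pursue((0.0, 0.0), target)
```

The point of the toy is the physicalist's: "it corrects its course as the plane dodges" is fully explained by an ordinary physical error-correcting mechanism.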
Just like people have a genetic code, perhaps, that gives us various instructions that we begin to follow out, or certain innate psychology or what have you. The argument quickly becomes very, very messy. The fan of the soul begins to want to protest, "Look, we're not just robots. We're not just robots with some sort of program in our brain that we're following. We've got free will. Robots can't have free will. So there's got to be something more to us than robots. We can't just be physical things." This is an interesting argument, and I think it's a new argument. We started with the idea you needed to appeal to souls in order to roughly explain why human bodies move, why we're animated or why we move in nonrandom ways. I think it's fairly clear that you don't need to appeal to souls in order to do that. Appeal to a physical body suffices, I think, to have an explanation as to the difference between an animated and an inanimate body, how bodies will move in nonrandom ways. If the brain is our CPU, then we'll behave in deliberate, purposeful ways just like a robot will behave in deliberate, purposeful ways. So this initial argument, I think, is not compelling. Still, we might wonder, what about this new argument? What about the fact that--We said there's a family of arguments, all of which have the general structure, inference to the best explanation, you need souls in order to explain feature F. Plug in a different feature F and you get a new argument. The one we started with--you need the soul to explain the animation of the body--that argument, I think, doesn't work. Now we've got a new one. You need the soul in order to explain free will. Let me come back to that argument later. It's a good argument. It's an argument well worth taking seriously, but let's come back to it later. First, let's run through some other things that might be appealed to as candidates for feature F. 
Suppose somebody says, "Look, it's true that we don't need to appeal to souls in order to explain why bodies move around in a nonrandom fashion. But people have a very special ability"--and so the argument goes--"that mere bodies couldn't have, that physicalists can't explain. That's the ability to think. It's the ability to reason. People have beliefs and desires. And based on their beliefs about how to fulfill their desires, they make plans. They have strategies. They reason about what to do. This tightly connected set of facts about us--beliefs, desires, reasoning, strategizing, planning--you need to appeal to a soul"--so the argument goes--"to explain that. No mere machine could believe. No mere machine has desires. No mere machine could reason." It's easy to see why you might think that sort of thing when you stick to simple machines. It's pretty clear that there are lots of machines that it doesn't seem natural to ascribe beliefs or desires or goals or reasoning to. My lawnmower, for example, doesn't want to cut the grass. Even though it does cut the grass, it doesn't have the desire. It doesn't think to itself, "How shall I get that blade of grass that's been eluding me?" So it's easy to see why we might be tempted to say no mere machine could think or reason or have beliefs or desires. That argument's much less compelling nowadays than I think it would have been 20 or 40 years ago. In an era of computers with quite sophisticated computer programs, it seems, at the very least, natural to talk about beliefs, desires, and reasoning and strategizing. So suppose, for example, we've got a chess-playing computer. On my computer at home I've got a program that allows my computer to play chess. I, myself, stink at chess. This program can beat me blind. I move my bishop, the computer moves its queen. What do we say about the computer? Why did the computer move its queen, or virtual queen? Why did the computer move its queen?
The natural thing to say is, it's worried about the fact that the king is exposed and it's trying to block me by capturing my bishop. That is what we say about chess-playing programs. Think about what we're doing. We're ascribing desires to the program. We're saying it's got an ultimate desire to win the game. A certain subsidiary desire is to protect its king, to capture my king. A certain other subsidiary desire is, no doubt, to protect its various other pieces along the way. It's got beliefs about how to do that by blocking certain paths or by making other pieces on my side vulnerable. It's got beliefs about how to achieve its goals. Then, it puts those combinations of beliefs and desires into action by moving in a way that's a rational response to my move. It looks as though the natural thing to say about the chess-playing computer is, it does have beliefs. It does have desires. It does have intentions. It does have goals. It does reason. It does all of this. It's rational to this limited extent. It's only able to play chess. But to that extent, it's doing all these things, and yet we're not tempted to say, are we, that the computer has a nonphysical part? We can explain how the computer does all of this in strictly physical terms. Of course, once you start thinking of it this way, it's natural to talk this way across a variety of things that the computer may be trying to do. It's perfectly open to you, as dualists, to respond by saying, "Although we personify the computer, we treat it as though it was a person, as though it had beliefs and desires and so forth, it doesn't really have the relevant beliefs and desires, because it doesn't have any beliefs and desires, because no physical object could have beliefs and desires." In response to that, I just want to say, "Isn't that just prejudice?"
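The belief-desire talk we apply to the chess program can be given a straightforwardly computational reading. In this toy sketch (my own illustration; the names and the scoring are invented, not anything from the lecture or from any real chess engine), "beliefs" are a stored model of what each action leads to, "desires" are a utility function over outcomes, and "reasoning" is just a search for the action whose believed outcome is most desired.

```python
# Toy "belief-desire" agent: beliefs = a model of what each action
# leads to; desires = a utility function over outcomes; reasoning =
# choosing the action whose believed outcome scores highest.
# All of it is ordinary computation over ordinary data structures.

def choose_action(beliefs, utility):
    """Pick the action whose believed outcome has the highest utility."""
    return max(beliefs, key=lambda action: utility(beliefs[action]))

# Hypothetical "beliefs" of a chess-like agent about its candidate moves.
beliefs = {
    "move queen":  {"king_safe": True,  "material": +3},
    "move pawn":   {"king_safe": False, "material": 0},
    "move bishop": {"king_safe": True,  "material": -1},
}

# Hypothetical "desires": keep the king safe above all, then win material.
def utility(outcome):
    return (100 if outcome["king_safe"] else 0) + outcome["material"]

best = choose_action(beliefs, utility)   # → "move queen"
```

Nothing here is more than data and a maximization step, which is the physicalist's point: ascribing beliefs and desires to the chess computer need not be mere personification, if beliefs and desires are a matter of functional organization like this.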
Of course, it is true that if we simply insist no physical object could really have beliefs or desires, then it will follow that when we are tempted to ascribe beliefs and desires to my chess-playing computer, we're falling into an illusion. That will follow once we assume that no physical object has beliefs or desires. But what reason is there for saying it has no beliefs or desires? What grounds are there for withholding ascriptions of beliefs and desires to the computer? That's far from obvious. Here's a possibility. Desires, at the very least, seem to be, at least in typical cases, very closely tied to a series of emotions. You get excited when you're playing chess at the prospect of capturing my queen and crushing me. You get worried when your pieces are threatened. Of course, more generally, you get excited, your heart goes pitter-pat, when your girlfriend or boyfriend says they love you. Your stomach sinks, you have that sinking feeling in the pit of your stomach, when you get a bad grade on a test. Maybe what's really going on is the thought that there's an aspect of desire that has a purely behavioral side, that's moving pieces around in a way that would make sense if you had this goal. And maybe machines can do that. But there's an aspect of desires, the emotional side, that machines can't have, but we clearly do have. Maybe we want to build that emotional side into talk of desires. So maybe if we want to say machines don't have a mental life and couldn't have a mental life, what we really mean is no machine could feel anything emotionally. So let's distinguish. Let's say there's a way of talking about beliefs and desires which is just going to be captured in terms of responding in a way that makes sense given the environment. Maybe computers and robots could do that. But there's clearly a side of our mental life, the emotional side, where we might really worry, could a robot feel love? Could it be afraid of anything? 
Again, our question was, "Do we need to appeal to souls to explain something about us?" The physicalist says "no"; the dualist says "yes." If what we mean by the mental is just that behavioral aspect of the mental, which even a chess-playing computer probably has, then that's not a very compelling argument. The physicalist will say, "Look, that aspect of the mental is pretty clear. We can explain it in physical terms." But let's just switch the argument. What about emotions? Can a robot feel emotions? Could a purely physical being fall in love? Could it be afraid of things? Could it hope for something? The latest version of our argument then is, "People can feel emotions. But if you think about it, it's pretty clear no robot could feel emotions. No merely physical thing could feel emotions. So there must be more to us than a merely physical thing." That's the argument we'll start with next time.
[YaleCourses_Philosophy_of_Death / 9_Plato_Part_IV_Arguments_for_the_immortality_of_the_soul_cont.txt]

Professor Shelly Kagan: We've been working our way through Plato's arguments for the immortality of the soul. And last time I spent a fair bit of time working through objections to, not quite the last argument we're going to look at, but the penultimate argument, in which Plato tries to argue for the simplicity of the soul. The set of connected ideas, you'll recall, were these: that Plato wants to suggest that in order to be destroyed you've got to have parts; to destroy something is to basically take its parts apart. If he could only convince us that the soul was simple, it would follow that it was indestructible and, hence, immortal. He asks, what's our evidence for some things being indestructible? What kinds of things are simple? Well, these are--he then goes on to claim--invisible things, things that don't change. After all, changing is a matter of the rearrangement of the parts. And so, if something can change, it can't be simple. Maybe it could be destroyed. But if we could become convinced that the soul was not composite, if it was something that couldn't change, then it would be simple. Perhaps then it would be indestructible. And then he goes on to suggest that the invisibility of the soul is evidence for its being changeless, and hence simple, and hence indestructible. So that's the argument we worked through last time. And I spent a fair bit of time suggesting that if you pin down precisely what Plato means by invisible, the argument doesn't actually go through. Before leaving that argument, there are a couple of extra remarks I want to make about it. First, we probably shouldn't have been so quick to want to buy into the suggestion that the soul is changeless. After all, if you think about it, it seems that at least on the face of it the soul does indeed change. On one day you believe, for example, that it's hot; on another day you believe that it's cold.
On one day you believe that so and so is a nice person; on the next day you believe that so and so is a mean person. You desire to learn the piano, the next day you give up on that desire. Your beliefs, your goals, your intentions, your desires--these things are all constantly changing. And so, at least on the face of it, it looks as though we might well want to say the soul--if we do believe there are souls--the soul is changing as well, in terms of what thoughts and beliefs it's housing. So we should have been skeptical in the first place of any argument that said, based on the invisibility of the soul, we can conclude that it's changeless. It doesn't seem to be in fact changeless. Furthermore, we should be, or at least we might well be, skeptical of the claim that the soul is simple. Indeed, Plato himself, in other dialogues, argues against the simplicity of the soul. Now, that doesn't mean he's right in the other dialogues, but at least suggests that we shouldn't be so ready to assume that sort of position is correct. In The Republic, famously, Plato goes on to argue that the soul has at least three different parts. There's a rational part that's in charge of reasoning; there's a spirited part that's sort of like the will; there's a part that has to do with appetite, desires for food, drink, sex, what have you. Plato elsewhere argues the soul is not simple at all. So perhaps it shouldn't shock us that the argument he's sketching here for the simplicity of the soul based on the changeless, invisible nature of the soul--perhaps it shouldn't shock us that that argument doesn't succeed after all. 
Finally, although I gave Plato, previously, the assumption that if only we could establish the simplicity of the soul, it would follow that the soul was indestructible--after all, you couldn't break a soul by tearing its pieces apart if it didn't have pieces, if it didn't have parts--nonetheless, I just want to register the thought that it's not actually obvious that simples can't be destroyed. Well, they clearly can't be destroyed by the particular method of destruction that involves taking them apart. If they don't have parts, you can't take them apart. But for all that, it still seems conceptually possible for a simple to be destroyed in the following sense: it goes out of existence. After all, where did the simples come from in the first place? Well, at least from a logical point of view, it seems as though there's no difficulty in imagining that at one point a given simple didn't exist and then at the next point it popped into existence. Well, how did that happen? Maybe God said--God says at the beginning of Genesis, "Let there be light." So maybe He says, "Let there be simples." At a given moment they weren't there; the next moment they were. Well, after a while maybe God says, "Let the simples no longer exist." One moment there they were; the next moment, they no longer exist. It seems as though that idea makes sense, and so even if we agreed that the soul was simple, even if we granted everything in Plato's argument up to this point and said, "the soul really is simple," it still wouldn't follow that it's immortal. We'd still have to worry about the possibility that the simple soul might simply pop out of existence at a given point, perhaps the very point when the body gets destroyed. So I'm inclined to think that this most recent argument of Plato's--the argument from simplicity--no, that's not successful either. Before leaving that argument, there's one other piece of business I want to discuss.
This is a footnote that I put aside, a point that I put aside previously. You'll recall the objection that got raised: that the right way to think about the soul is that it is like the harmony of a harp. And this was originally offered as a counterexample to the thought that invisible things couldn't be destroyed. But harmony could be destroyed. It was invisible, so invisible things could be destroyed. But I mentioned that, look, whether or not this is a problem for the argument, it's an interesting suggestion in its own right. Because the suggestion that the mind is to the body, the soul is to the body, like harmony is to an instrument with strings, seems to me to be an early attempt to describe something like the physicalist conception of the mind. Just as harmony is something that gets produced by a well-tuned instrument, the soul or the mind is something that gets produced by a well-tuned body. Now Plato's got some objections to the suggestion that we should think of the mind as the harmony of the body. And so I want to take just a moment and talk about those objections because, of course, if they were compelling objections, that might well give us reason to doubt the physicalist view. Whether or not Plato's arguments for the immortality of the soul work, he might still have some good arguments against the physicalist conception. But in thinking about these objections, it's important to bear in mind that the harmony analogy is only meant as just that, as an analogy. Right? The claim isn't, or at least it shouldn't be, understood as saying literally, "the mind is harmony." It's rather, the mind is like harmony; it is to the body the sort of thing that harmony is to a harp, something that can be produced by a well-functioning, well-tuned physical object. A well-tuned instrument can produce melody and harmony. A well-tuned, properly functioning body can produce mental activity. That's the suggestion.
And so even if it turns out that there are some ways in which the mind isn't exactly like harmony, it doesn't show us that the physicalist view is wrong. Well, so let's quickly look at what Plato's arguments were. First--this is, I think, an interesting argument--Plato says, harmony clearly cannot exist before the existence of the harp itself. Right? The melodiousness of the harp can't exist prior to the physical construction of the harp. And if mind were the sort of thing that was produced by the proper functioning of the physical body, then pretty obviously the mind could not exist prior to the creation of the physical body. However, Plato has already argued earlier in the dialogue that the soul does exist prior to the existence of the body. That's the argument from recollection. If the soul exists prior to the body, it can't be like harmony; physicalism has clearly got to be false. But I said that I didn't find the argument--I tried to explain why I didn't find the argument from recollection persuasive. I certainly do want to agree that if we became convinced that the soul did exist prior to the existence of the body, we would certainly want to agree that the soul is not like harmony. But I don't think the argument from recollection succeeds. Plato's second objection is to point out that harmony can vary. We talk about the melodiousness of the harp. Well, it could be harmonious in a variety of different ways and, indeed, to different degrees. Something--an instrument--could be more or less harmonious. What it's playing can be in greater or lesser harmony. But it doesn't seem as though souls come in degrees. You've got a soul or you don't have a soul. You've got a mind or you don't have a mind. That's the objection, and of course if that was right, then again we might have to conclude, well, whatever the mind is, it's not quite to the body what harmony is to the harp.
But I'm not so sure we should agree that the mind can't come in degrees. It can at least--The mental aspects can come in degrees. We can have varying degrees of intelligence, varying degrees of creativity, varying degrees of reasonableness, varying degrees of ability to communicate. So just as, we might say, just as the functioning of the harp can come in varying degrees--more or less harmony--the functioning of the body in terms of its mind can come in varying degrees. So that second objection doesn't seem to me very compelling. Third objection, Plato points out--Socrates points out--that the soul can be good or it could be evil, wicked. When the soul is good, when you've got somebody who has got their stuff together, we might speak of them as having a harmonious soul. If the soul were to the body like harmony is to the instrument and the soul can be harmonious, it would seem as though we'd have to be able to talk about harmony being harmonious. So just as we can talk about the harmony of the soul, we'd have to be able to--if the soul is like the harmony of the body--we'd have to be able to talk about the harmony of the harmony. But we don't talk about harmony of the harmony. I'm not quite sure what to make of this objection. This might be a point where it would be well to remind ourselves of the fact that the suggestion was never that the soul just literally is harmony. It's just similar to harmony, says the physicalist, in the way that harmony gets produced by the body--by the instrument. In that same way, mind or mental activity gets produced by the body. We don't have to say that everything that's true of the mind is true of harmony and everything that's true of harmony is true of the body--or the mind. Still, I think there's a bit more we can say in response to this objection, and that's this: Just as it's true that we can talk about minds or souls being good or wicked, we can talk about different kinds of harmony. 
There are--Certain harmonies are sweeter than others; some of them are more jarring and atonal or discordant. Although we might not normally talk about how harmonious the harmony is, it seems as though harmonies can come in different sorts and different kinds. And then, it turns out we really would have an analogy to the mind, which can come in different sorts and different kinds. So I think this third objection isn't really compelling either. Finally, Plato raises one more objection. He says, "Look, the soul is capable of directing the body, bossing it around, and indeed capable of opposing the body." You know, your body might want that piece of chocolate cake, but your soul says, "No, no. You're on a diet. Don't eat it." Right? Your soul can oppose the body. But if the soul was just harmony of the body, how could it do that? After all, the harmoniousness of the harp can't affect what the harp does. All the causal interaction is one way, as we might put it. In the case of the harp and the musicality and the melodiousness and the harmony, the physical state of the harp causes the melodiousness to be the way it is. But the harmoniousness of the harp doesn't ever change or alter or direct the way the physical object, the harp, is. In contrast, not only can the body affect the soul, the soul can affect the body. So that suggests the soul can't really be to the body what harmony is to the harp after all. I think that's a pretty interesting objection. Since we do think, at least in the kind of position that we've been taking for this class, that the soul can affect the body, we might ask, how could it be that the physicalist view is right? If talk about the mind is just a way of talking about what the body can do, how can the abilities of the body affect the body itself?
I think the answer to this objection is probably going to be something like this: what's really going on when we talk about the soul affecting the body--when we say certain mental functions are affecting the body--is that certain physical parts of the body, the parts that underwrite, that lie beneath, the proper mental functioning of the body, are able to alter the other parts of the body. So look, right now I'm telling my body, "Wiggle my fingers." My soul is giving instructions to my body. How does that happen? That's my mind giving instructions to my body. How does that happen on the physicalist view? Well, my mind giving instructions to my body, "wiggle my fingers," is just one part of my body, my brain, giving instructions to another part of my body, the muscles in my fingers. So, although we talk about the mind altering the body, strictly what's going on there, says the physicalist, is just one part of the body affecting another part of the body. Can we have something like that with a harp? Well, maybe not. Right? Maybe the harp's too simple a machine to have one part of it affect another part of it in that way. Even if that were true, that wouldn't give us reason to reject the physicalist conception. It would just give us reason to think the harp's not very much like the mind and the body. It's just the beginnings of a picture, of a physicalist picture. Still, even if we think about the harp and musicality, I think we can see something analogous. Suppose I pluck a string on my harp, producing a certain note. As we know, the vibrations of one string can set the other strings vibrating as well. And so, suddenly, what's happening in one part of the harp affects what's going on in other parts of the harp.
The musicality of my playing a certain chord on the harp may create certain kinds of overtones in the harp, setting the harp vibrating in various other ways. Well, that would be analogous--perhaps not a precise analogy, but at least a rough analogy--to what goes on when my mind affects my body, when one part of my body affects other parts of my body. So, on the one hand, I want to give Plato a fair bit of credit for taking the physicalist view seriously enough to try to criticize it. And since when he was writing there weren't the kind of complicated thinking machines that we've got nowadays, it's no criticism of Plato that he used simple machines like musical instruments to try to think about what a physicalist picture would look like. I want to give him credit, but I also want to suggest that the objections that he raises to the physicalist view just don't succeed. All right. Now, there's one other argument that I want to consider in our dialogue. After the appeal to the simplicity of the soul, there's a very long, complicated discussion about what constitutes an adequate explanation, and Socrates gives some of his history there and talks about what he's looking for in trying to find adequate explanations of things. These passages are very, very difficult, and happily for our purposes we don't really need to go there. Before the dialogue ends, though, there's one further argument, which I'll dub "the argument from essential properties." Now again, it's important to bear in mind as we try to make sense of this passage that Plato is writing at a time when we didn't have all the conceptual apparatus that we have nowadays. We stand on his shoulders; we've inherited some of the distinctions that he was the first to try to put into play. And so although, again, I'm about to sketch or reconstruct an argument and claim that the argument doesn't actually work, this isn't really meant by way of being dismissive of Plato.
I want to give him a tremendous amount of credit. He's trying to see his way through a morass of issues that are still confusing to us today, though I think we can see somewhat further than he was able to see. At any rate, the distinction we need to understand the final argument is the distinction between an essential property and a contingent property. An essential property is a property that a given object must have; it always has it, as long as it exists at all. A contingent property is a property that an object may have, may happen to have for its entire existence, but could've existed without. So my car is blue. That's a contingent property of my car. I could take it to the paint shop and get it painted red, in which case it would be red. It would no longer be blue, but the car would still exist. My car is blue, but it could be red; it could exist as a red car. And even if I never, over the entire course of the existence of my car, get it painted, so that from the moment it came into creation to the moment it gets smashed it's always blue--still, we understand perfectly well the idea that it could've been red. There's nothing incompatible with the idea that this car exists and is red. So that's an example of a contingent property. And I might have a pencil, and the pencil is whole. And I never break it, but I could've broken it. That's a contingent property, whether the pencil is whole or broken. I take a piece of metal; it's a contingent property whether it's straight or bent. I bend it; now it's bent. I might straighten it back out; now it's straight. Many, many properties are contingent properties. You're happy, you're sad, you're awake, you're asleep. But some properties, in contrast, are essential properties. For the particular thing that we're thinking about, it's not possible to have that thing and not have the property in question. Plato gives the example of fire and being hot. Fire is hot.
That's a property that it's got, but it's not a contingent property; it's an essential property. It's not as though some fire is hot and some fire is cold or, "Oh yes, it just happens that over the entire life of the fire the fire is hot, but we could have made it cold." There's no such thing; there could be no such thing as cold fire. As long as you've got a bit of fire, it's hot. Take away the heat, you take away the fire, you destroy the fire. You can't have cold fire. That's an example of an essential property. That is to say, Plato sees, as indeed I take it we all see at least roughly, that there's some sort of distinction there, and he's trying to see his way clear on these matters. That remains a controversial question even today. Are there really essential properties in the way we take there to be? If so, which properties are essential? Which ones are contingent? Water is composed of H_2O--that's its atomic structure. Is that an essential property of water? Could you have something that was water without being composed of H_2O--hydrogen and oxygen in that way? Well, some people say yes, some people say no--but most of us want to say, "Oh, there's an example of an essential property. To be water, you must have that atomic structure." All right. That's the thought. Now, armed with this distinction, Plato says, "Here's an essential property for the soul. Wherever there's a soul, it's alive." Now, by "alive," I take it Plato means it's thinking, or it's capable of thought. Wherever you've got a soul, you've got something capable of thought. I suppose one could try to resist this claim of Plato's, but I find it reasonably plausible. I start thinking about minds, and I ask myself, "Could there be a mind that was incapable of thought?" Maybe not. Maybe that's just built into minds by definition. Just like you couldn't have something that was fire without it being hot, you couldn't have something that was a mind without it being capable of thought.
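The essential/contingent distinction being drawn here can be stated compactly in modal notation, with the box read as "necessarily" and the diamond as "possibly." This formalization is my own gloss on the lecture, not something the professor writes down:

```latex
% P is an ESSENTIAL property of x:
% necessarily, if x exists at all, x has P (fire and being hot).
\text{Essential}(P, x) \;\equiv\; \Box\big(\,\mathrm{Exists}(x) \rightarrow P(x)\,\big)

% P is a CONTINGENT property of x:
% x actually has P, but x could have existed without P
% (the blue car that could have been red).
\text{Contingent}(P, x) \;\equiv\; P(x) \;\wedge\; \Diamond\big(\,\mathrm{Exists}(x) \wedge \neg P(x)\,\big)
```

On this reading, "fire is hot" instantiates the first schema, while "my car is blue" instantiates the second: even a car that stays blue its whole existence satisfies the possibility clause.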
It's important to stress the word "capable" here. Right? It's not as though all minds always are thinking. I presume there are stretches during the night when my mind is not thinking, not dreaming. Still, it's capable of thought even though it's not thinking at the time. But suppose you say, "No. Here's a mind that's not even capable of thought." I want to say, "Then it's just not a mind." So all right, maybe being capable of thought is an essential property of the mind. Plato thinks about the mind in terms of souls, so maybe being capable of thought is an essential property of the soul. And I think that's what Plato means when he suggests the mind is essential--the soul is essentially alive. It's a necessary property, as we might put it, of the soul, that it's alive, that it's capable of thought. So I want to say: not an implausible claim. Let's give it to Socrates. But once we give it to Socrates, Plato thinks now he's pretty much done. After all, think about what it means to say that something's got an essential property. Fire's got the essential property of being hot. It means there are only two possibilities. Either you've got some fire and it will be hot, or the fire has been destroyed, it's been put out. Those are the only two possibilities. If heat is an essential property of fire, either you've got some fire and it's hot, or the fire no longer exists, it's been put out. There's no third possibility of a non-hot fire, of a cold fire. So, if you've got the claim that life's an essential property of the soul, only two possibilities: either you've got the soul and it's alive--to wit, it's capable of thought--or the soul's been destroyed. But Plato thinks we can rule out that other possibility. How? Well, it's by thinking about this particular essential property.
There's nothing in the idea that fire has the essential property of being hot to make us think it couldn't be destroyed, but there is something, Plato thinks, in the idea of being essentially alive to rule out the possibility of its being destroyed. In fact, as you say the very words you begin to feel the force, the pull of Plato's position. If the soul is essentially alive, if it's necessarily alive, it's got to be alive. It can't be destroyed. That's, I think, at least the kind of argument that Plato means to put forward. He does it in terms of the phrase "deathless." I want to actually get this up here on the board. One--life is an essential property of the soul. But if you think about what that means, it follows that the soul is deathless. After all, if the soul is essentially alive, that means it can't be dead. So it's deathless. But after all, anything that's deathless can't die. So the soul cannot die, which is just to say it's indestructible. So, the soul can't be destroyed. Something like this seems to be Plato's argument. One, life's an essential property of the soul, but we can just summarize that by saying the soul is deathless. But if the soul is deathless, it can't die. If it can't die, it can't be destroyed; it's indestructible. So the soul can't be destroyed. Remember, once we said the soul was alive, there were only two possibilities. If the soul was essentially alive, either we have the soul, it's alive, capable of thought, or it's destroyed. But if the soul can't be destroyed, that leaves only the possibility that the soul is alive, capable of thought. That's just what Plato thinks; the soul will always exist, capable of thought. Well, it won't shock you to hear that I don't think this argument actually works. And I think where it goes wrong is that there's a certain kind of ambiguity in the idea of being deathless. What does it mean to say that something is deathless? I think there are two possible interpretations of that phrase.
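The "on the board" version of the argument described above can be laid out step by step. The numbering and parenthetical glosses are my reconstruction of what the professor sketches verbally, not a verbatim transcription of the board:

```latex
\begin{enumerate}
  \item Life is an essential property of the soul
        (necessarily, if the soul exists, it is alive). \hfill (premise)
  \item Therefore the soul is deathless. \hfill (from 1)
  \item Whatever is deathless cannot die. \hfill (by definition)
  \item Whatever cannot die cannot be destroyed;
        it is indestructible. \hfill (from 3)
  \item Therefore the soul cannot be destroyed. \hfill (from 2--4)
\end{enumerate}
```

Laid out this way, the weight of the argument rests on step 2: everything turns on what exactly "deathless" is taken to mean.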
If something is deathless, then it can't be that--well, what? One possibility is: it can't be that the soul exists and is dead. That's one possible interpretation. To say that something is deathless means you'll never have a soul that exists and, at the same time that it exists, is dead. But there's a second possible interpretation of deathless: it can't be that the soul was destroyed. It's very easy to confuse these two interpretations of deathless, A and B. And basically, this is what I think is going on with Plato. He's running back and forth between these two interpretations. If life is an essential property of the soul, then that means we will never have, as it were, a soul in our hand that exists and is dead. Just in the same way that you'll never have a piece of fire in your hand, as it were, that exists and is cold. It can't happen. Wherever you've got a soul, it is alive. So it's deathless in sense A. Since wherever you've got a soul it must be alive, it couldn't be the case that the soul exists and is dead. So it's deathless in sense A. But for all that, it could still be, logically speaking, that the soul could be destroyed, just like a fire can be put out. We could imagine something that couldn't be destroyed. Then of course it would be deathless in sense B, a much stronger sense of deathless. What Plato needs, what Plato wants, is to convince us that the soul is deathless in sense B: it's true of the soul that it can't be destroyed. But all he's entitled to is sense A: you'll never have a soul that exists and is dead, because being alive is an essential property of the soul. But the mere fact that where there's a soul it's alive doesn't mean the soul couldn't be destroyed. Just like the fact that where there's fire it's hot doesn't mean the fire can't be destroyed. It's, I think, pretty easy to get confused in thinking about these issues. It's difficult to see your way clearly to these two different notions of deathless.
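The gap between the two senses can be made vivid with a toy possible-worlds model: to show that sense A does not entail sense B, we only need one scenario where A holds and B fails. This is a small illustrative sketch of my own; the world list and flag names are invented for the example, not anything from the lecture:

```python
# Toy possible-worlds model of the two senses of "deathless".
# Each world records whether the soul exists there and, if so,
# whether it is alive.

worlds = [
    {"soul_exists": True,  "soul_alive": True},   # soul present and alive
    {"soul_exists": False, "soul_alive": False},  # soul has been destroyed
]

def deathless_sense_a(worlds):
    """Sense A: in every world where the soul exists, it is alive."""
    return all(w["soul_alive"] for w in worlds if w["soul_exists"])

def deathless_sense_b(worlds):
    """Sense B: the soul cannot be destroyed, i.e. it exists in every world."""
    return all(w["soul_exists"] for w in worlds)

print(deathless_sense_a(worlds))  # True  -- no world with a dead, existing soul
print(deathless_sense_b(worlds))  # False -- a world where it was destroyed
```

The second world is exactly the "fire put out" case: sense A is vacuously preserved there (no existing soul is dead), yet sense B fails, which is why the inference from A to B doesn't go through.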
It's difficult to get to the point where you can clearly use the language of essential properties without getting screwed up. Still, I think that's what happened here. We grant Plato the thought that the soul has the essential property of being alive; from this, it follows that where there's a soul it is alive, and hence, it's deathless in sense A. But once we start thinking about the category, the notion of being deathless, we're tempted to re-understand that as being deathless in sense B--can't be destroyed. And that, I think, doesn't follow. All right. Where does that leave us? Plato's gone through a series of arguments for the immortality, the indestructibility of the soul, and I've argued that none of them work. Some of them are worth taking seriously. That's why we've spent the last week or so going over them. But none of them, as far as I can see, are successful. And I hardly need remind you that this comes on the heels of a previous week or two in which we talked about various other arguments for the very existence of an immaterial soul. And I've argued that none of those arguments work either. As far as I can see, then, the arguments that might be offered for the existence of an immaterial soul, let alone an immortal soul--those arguments don't succeed. It's not that the idea of a soul is in any way silly; it's not that it's not worth thinking about. It's that when we ask ourselves, "Do we have any good reason to believe in an immaterial soul?" and actually try to spell out what those reasons might be, as we look more carefully we see the arguments are not very compelling. So I'm prepared to conclude there is no soul. There's no good reason to believe in souls--at least, there's no good enough reason to believe in souls--and so I conclude there are none. And this is the position that, from here on out, I'm going to be assuming for the rest of the class.
I'm going to have us continue to think about death, but now think about death from the physicalist perspective--given the assumption that the body is all there is, that talk about the mind is just a way of talking about the abilities of the body to do certain special mental activities. There are no extra things beyond the body, no immaterial souls. Now, it wouldn't be unreasonable at this point to accuse me of begging the question. After all, think about what I've done. I've put all of the burden of proof on the fan of souls. I've asked the dualist, "Give me some reason to believe your position." And I've said the arguments on behalf of dualism aren't very convincing. Don't I now need, in fairness, to do the same thing for the physicalist? Don't I need to turn to the physicalist and say, "Give me some reason to believe that physicalism is true. Give me some reason to believe souls don't exist"? After all, I turned to the dualist and said, "Give me some reason to believe in souls." Those arguments didn't work. Don't I now need to turn to the physicalist and say, "Give me some reason not to believe in souls. Prove that souls don't exist"? Isn't that fair? So let's pause and ask ourselves: how do you go about proving that something doesn't exist? Or, to put it in a slightly better way, when do you need to prove that something doesn't exist? When we have examples of things whose existence we don't believe in, how do we decide when we're justified in disbelieving them? Take something like dragons. Let me assume that everybody in this class, in this room, does not believe in the existence of dragons. How do I prove that there aren't any dragons? I mean, there could be dragons, couldn't there? But there aren't any. We don't believe in dragons. So don't you need to disprove the existence of dragons before you continue on your way of not believing in them? I imagine nobody in this room believes in the existence of Zeus, the Greek god.
How do you disprove the existence of Zeus? Don't we have an obligation to prove that Zeus doesn't exist? But how could you do that? Well, unsurprisingly, I don't actually think you do have an obligation to disprove those things. That doesn't mean you don't have any obligations. You just have to be very careful about what the intellectual obligations come to. So back to dragons. What do we need to do for dragons? Well, the most important thing you need to do, to justify your skepticism about dragons, is to refute all of the arguments that might be offered on behalf of dragons. My son's got a book about dragons with some very nice photographs. So, one of the things I need to do in order to justify my skepticism about dragons is explain away the photographs, or the drawings, or what have you. I need to explain why it is that we have pictures, even though there really aren't any dragons. Well, some of these are just drawings, and people were drawing things out of their imagination. As for the things that look like photographs--nowadays, with computer-generated graphics, you can make things that look like photographs, and given Photoshop you can make things that look like pictures of just about anything, even things that don't exist. How do I prove there aren't any unicorns? Well, I look at the various reported sightings of unicorns and I try to explain them away: "Well, you know, the first time Europeans saw the rhinoceros, it sort of reminded them of a horse with a big horn. And maybe that's where the various reports of the unicorn came from. The various unicorn horns that have been offered in various collections, upon examination by biologists, turn out to be narwhal horns, horns from whales, and so forth and so on." You look at each bit of evidence that gets offered on behalf of the unicorn and you debunk it. You explain why it's not compelling.
And when you're done, you're entitled to say, "You know, as far as I can tell, there aren't any unicorns. As far as I can tell, there aren't any dragons." It's not as though you've got some obligation to look in every single cave anywhere on the surface of the Earth and say, "Oh, no dragons in there, no dragons in there, no dragons in there, no dragons in there, no dragons in there." You are pretty much justified in being skeptical about the existence of dragons once you've undermined the arguments for dragons. Now, there might be something more that you could do. In at least some cases, you can go on to argue that the very idea of the kind of thing we're talking about is impossible. Take dragons again; it's not just that there's no good reason to believe in dragons. The very idea of a dragon may be scientifically incoherent, at least given science as we understand it. I mean, dragons are supposed to breathe fire. So that must mean they've got fire in their belly. But how does the fire continue to exist in their belly, absent oxygen? Why isn't the fire in their belly busy burning and destroying the membranes of their stomach, or whatever? All right, you could, I suppose, try to prove that dragons were scientifically impossible. And if you could, then you'd have an extra reason not to believe in them. But it's not as though you have to prove that something's impossible to be justified in not believing in it. I don't think unicorns are impossible. I just don't think there are any. Surely, there could be horses with a single long horn growing out of their forehead. There just aren't any. So armed with these ideas, come back to the discussion of souls. Do I, as a physicalist who does not believe in the existence of souls--immaterial entities above and beyond the body--need to disprove the existence of souls? "Well, there's no soul here, no souls there." No.
What I need to do is take a look at each argument that gets offered for the existence of a soul and rebut it--explain why those arguments are not compelling. I don't need to prove that souls are impossible. I just need to undermine the case for souls. If there's no good reason to believe in souls, that actually constitutes a reason to believe there are no souls. Now, if you want to, you could go on and try to prove that souls are impossible, in the same way that maybe dragons are impossible. But I'm not sure that I myself find such impossibility claims especially persuasive. I don't believe in the existence of souls, but that doesn't mean that I find the idea of an immaterial entity like the soul impossible. Now, some people might say, "Well, you know, it violates science as we know it. It violates physics to have there be something immaterial." But science is constantly coming around to believe in entities or properties that it didn't believe in previously. Maybe it just hasn't gotten around to believing in souls yet. Or if current science rules out the possibility of souls, maybe we should say, "So much the worse for current science." So I'm not somebody who wants to say we can disprove the existence of souls. I don't think we can disprove them. I don't think the idea of a soul is in any way incoherent. There are philosophers who've thought that. I'm not one of them. But I don't think I need to disprove the existence of a soul to be justified in not believing in it. Unicorns aren't impossible, but for all that, I'm justified in thinking there aren't any. Why? Because all the evidence for unicorns just doesn't add up to a very convincing case. Souls are not impossible, but for all that, I think I'm justified in believing there aren't any. Why? Because when you look at the arguments that have been offered to try to convince us of the existence of souls, those arguments just aren't very compelling, or so it seems to me.
So, from this point on out, I'll be assuming the physicalist view is correct, and will be thinking about the issues of death as they'd be understood from the physicalist point of view.
2_The_nature_of_persons_dualism_vs_physicalism.txt (YaleCourses: Philosophy of Death)

Professor Shelly Kagan: The first question we want to discuss has to do with the possibility of my surviving my death. Is there life after death? Is there a possibility that I might still exist or survive after my death? Now at first glance--and in fact, I think, it's going to turn out to be true at second glance as well--you might think that the answer to this question would depend on two basic issues. Do I survive my death? Do we survive our deaths? You think the first thing we have to get clear on is: well, what am I? What kind of a thing am I? Or, generalizing, what kind of thing is a person? What are we made of? What are our parts? It seems plausible to think that before we could answer the question, "Do I survive?" we need to know how I'm built. And so the first thing we're going to spend a fair bit of time on is trying to get clear on: what's a person? What are the fundamental building blocks of a person? The second question that you might think we'd want to get clear on is: what's the idea, or what's the concept, of surviving? Before we ask, "Do I survive?" we need to get clear on "What am I?" and "What is it to survive?" What is it for something that exists in the future to be me? Now this question can be discussed philosophically in quite general terms. What's the nature of persistence of identity over time? But since we're especially interested in beings like us, people, this sub-specialized version of the question of identity gets discussed under the rubric of personal identity. What's the key or the nature or the basis of personal identity? As we might put it: what is it for somebody who's here next week to be the same person as me? What's the nature of personal identity? So, as I say, at first glance you might think that to get clear on the answer to "Do I or might I or could I survive my death?" we need to know: what am I? What's a person?
What's the metaphysical composition of people, on the one hand? And we need to get clear on the nature of identity or persistence--or, more specifically, personal identity. Now as I say, I believe that when push comes to shove, we do need to get clear about both of those questions, and so that's going to take the first several weeks of the class. We're going to spend a couple of weeks talking about "What's a person?" And then we're going to spend several weeks, or at least a week or so, talking about the nature of personal identity. But before we can even get started, there's a question--really, an objection to the whole enterprise. So we're about to spend a lot of time asking the philosophical question: Is there life after death? Could there be life after death? Might I survive my death? But there's a philosophical objection to the entire question. And the objection is fairly simple. It says the whole question is misconceived. It's based on a confusion. Once we see the confusion, we can see what the answer to our question is. Could I survive my death? The answer has got to be--this is what the objection says--the answer has got to be: obviously not. All right, so here's the objection. I should mention that the very first reading that you're going to be doing is a couple of pages from Jay Rosenberg, a contemporary philosopher. He gives us a version of this objection. So I'll give you one version. You'll have another version in your readings. The objection basically says: what does it mean to say that somebody's died? We're asking, "Is there life after death?" What does it mean to say that somebody has died? Well, a natural definition of death might be something like the end of life. If that's right, then to ask, "Is there life after death?" is just asking, "Is there life after the end of life?" The answer to that ought to be pretty obvious. Well, obviously, the answer to that is no.
After all, we're asking: once you've run out of life, is there any more life? Well, duh! That's like asking: when I've eaten up all the food on my plate, is there any food left on my plate? Or: what happens in the movie after the movie ends? These are stupid questions, because once you understand what they're asking, the answer is just built in. It follows trivially. So although it has seemed to people over the ages that the question "Is there life after death?" is one of the great mysteries, one of the great philosophical things to ponder, the objection says that's a kind of illusion. In fact, once you think about it, and not all that long, you can see the answer's got to be no. There couldn't possibly be life after death. There couldn't possibly be life after the end of life. Or suppose we ask the question in a slightly different way. Might I survive my death? Well, what does the word "survive" mean? Well, "survive" means something like this: we say that somebody's a survivor if something's happened and they haven't died. They're still alive. When there's a car accident, you ask: did so-and-so die, did so-and-so survive? This person survived. To say that they survived is just to say that they're still alive. So, "Might I survive my death?" is like asking, "Might I still be alive after"--well, what's death? Death is the end of life. So--"might I still be alive after I've stopped living? Might I be one of the people who didn't die when I died?" Gosh, the answer to that is, again, duh! No. You couldn't possibly survive your death, given the very definition. It should remind us of--at least it reminds me of--this joke that you probably told. It seemed hysterical when you were seven. The plane crashes exactly on the border of Canada and the United States. Exactly on the border. There are dead people everywhere. Where do they bury the survivors? The answer is: you don't bury the survivors. So when you're seven you think, "I don't know. Do they bury them in Canada?
Do they bury them in America?" The answer is: you don't bury the survivors, because survivors are people who haven't yet died. So, "Can I survive my death?" is like asking, "Could I not have yet died after…?" The answer is: of course you have to have died if you died, and you haven't survived if you've died. So the question can't even get off the ground. That, at least, is how the objection goes. Now, I don't mean to be utterly dismissive of the objection. That's why I spent a couple of minutes trying to spell it out. But I think there's a way to respond to it. We just have to get clearer about what precisely the question is that we're trying to ask. This is something that Rosenberg tries to get clear on as well. So here's my attempt to make the question both a bit more precise and a question that's an open question--a question we can legitimately raise. Well, now, as you will hear on several occasions over the course of the semester, I'm a philosopher. What that means is I don't really know a whole lot of facts. So I'm about to tell you a story where I wish I knew the facts. I don't know the facts. If I could really do it right, I'd now open the door and bring in our guest physiologist, who would then provide the facts where what I'm about to do is say "blah, blah, blah." If we had the physiologist come in, he'd actually tell us these things. But I don't know them. I don't have that person. But take a look at what happens when a body dies. Now, no doubt, you can kill people in a lot of different ways. You can poison them, you can strangle them, you can shoot them in the heart. The causal paths that result in death may start out differently, but I presume that they converge, and you end up having a set of events take place. Now what are those events?
This is exactly where I don't really know the details, but I take it it's something like this: because of whatever the original input was, eventually the blood's no longer circulating and oxygen isn't making its way around the body. So the brain becomes oxygen-starved. Because of the lack of oxygen getting to the cells, the cells are no longer able to carry on their various metabolic processes. Because of this, they can't repair the various kinds of damage they need to repair, or create the amino acids and proteins they need. So as decay begins to set in and the cell structures begin to break down, they don't get repaired as they normally would be, and so eventually you have breakdown of the crucial cell structures and, boom, the body's dead. Now as I say, I don't really know whether that's accurate, the little rough story I just told, but some story like that is probably right. And in typical philosophical fashion, I've drawn that story for you up here on the board. So the events that I don't really know the details of, we can just call B_1, B_2, B_3, up through B_n. Before B_1 begins, you've got the body working, functioning, in its bodily way--respirating, reproducing the cells, and so forth and so on. And at the end of the process, by B_n, the body's dead. B for bodily. B_1 through B_n; that's what death is. At least, that's what death of the body is. As I say, it's the sort of thing that somebody from the medical school or a biologist or a physiologist or something could describe for us. So here's the question then. Suppose we call that process "death of the body." Call what has occurred by the end of that sequence of events "bodily death." Now here's a question that we can still ask--at least, it looks as though we can still ask it. Might I, or do I, still exist after the death of my body? Might I still exist after bodily death?
I don't mean to suggest in any way that we yet know the answer to that question, but at least that's a question that it seems as though we can coherently raise. There's no obvious contradiction in asking: might I still exist after the death of my body? The answer could turn out to be no. But at least it's not obviously no. Either way, it's going to take some sustained argument to settle it. The answer could turn out to be yes, for all we know at this point. This just brings us back to the thought that whether or not I could still exist after the death of my body looks like it should depend on what I am. So in a minute, that's the question that I'm going to turn to. But it's a bit cumbersome to constantly be asking: might I still exist after the death of my body? So no harm is done, once we've clarified the question that we're trying to ask, if we summarize that question in a bit of jargon or a slogan. Instead of asking "Might I survive?" or "Might I continue to exist after the death of my body?" you might put it this way. You might say for short: will I survive the death of my body? No harm done. Or: will I survive my death? Because what we were just stipulating we mean when we talk about my death, in the context of this question, is the death of my body. No harm done. We can just say for short: will I survive my death, or might I survive my death? For that matter, no serious harm done if we ask: is there life after death? As long as we understand that what we're asking about there is not the life of my body. It's just another familiar way of trying to ask: will I still be around after my death? Will I still exist after my death? So I think there's a perfectly legitimate question, and that's the question we now want to turn to. As I said, it looks as though to answer the question "Could I continue to exist after the death of my body?"--"Is there life after death?" "Could I survive my death?"
for short--to answer that question, we need to get clearer about what exactly it is for something to be me. That's a question we'll turn to in a couple of weeks. First, we've got to get clearer about: What am I? What kind of an entity am I? What am I made of? In philosophical jargon, this is a question from metaphysics. So we're asking the metaphysical question: what kind of a thing is a person? It seems plausible to think that whether or not a person can survive or continue to exist after the death of his or her body should depend on how he or she is built--what he or she is made of, what his or her parts are. So, let me sketch for you two basic positions on this question: what is a person? Two basic positions. They're both, I imagine, fairly familiar. What we're going to have to do is try to decide between them. They're not the only possible positions on the question of the metaphysics of the person. But they're, I think, the two most prominent positions, and definitely the ones most worth taking seriously for our purposes. So, the first possible position is this. A person is a combination of a body and something else--a mind. But the crucial thing about this first view that we want to talk about is that the mind is thought of as something separate from, and distinct from, the body. To use a common enough word, it's a soul. So people are, or people have, or people consist of, bodies and souls. The soul is something, as I say, distinct from the body. I take it the idea of the body is a familiar one. It's this lump of flesh and bone and muscle that's sitting here in front of you and that each one of you sort of drags around with you. It's the sort of thing that we can put on a scale and prod with a stick and that the biologists can study, presumably made up of various kinds of molecules, atoms and so forth. So we've got the body. But on this first view, we also have something that's not body. Something that's not a material object. Something that's not composed of molecules and atoms.
It's a soul. It's the house of, or the seat of, or the basis of, consciousness and thinking, perhaps personality. But the crucial point for this view is that the proper metaphysical understanding of the mind is to think of it in nonphysical terms, nonmaterial terms. That, as I say, is the first basic view. I'm going to say more about that view, a fair bit more about that view, over the next couple of weeks. First, let me sketch the other basic view. So this first view we can call "the dualist view." Dualist, of course, because there are two basic components--the body and the soul. Although I may occasionally slip, I'm going to try to preserve the word "soul." When I use the word "soul," I'm going to have in mind this dualist view, according to which the soul is something immaterial, nonphysical--something of some other kind. The body is a material substance; the soul is an immaterial substance. That's the dualist view. The alternative view that we're going to consider is not dualist, but monist. It says there's one basic kind of thing and only one basic kind of thing: bodies. So what's a person? A person is just a certain kind of material object. A person is just a body. Of course, it's a very fancy material object. It's a very amazing material object. That's what this second view says. The person is a body that can do things that most other material objects can't do. So on the monist view--which we'll call "physicalism," because it says that what people just are, are these physical objects--on the physicalist view, a person is just a body that can…now you fill in the blank. You point out the kinds of things that we can do. We can talk. We can think. We can sing. We can write poetry. We can fall in love. We can be afraid. We can make plans. We can discover things about the universe. According to the physicalist view, a person is just a body that can do all of those things: can reflect, can be rational, can communicate, can make plans, can fall in love, can write poetry.
That's the physicalist view. As I say, we've got two basic positions. There's the dualist view--people are bodies and souls. And there's the physicalist view, according to which there are no souls. There are no immaterial objects like that. There are only bodies, though when you've got a functioning body like ours, so the physicalist says, these bodies can do some pretty amazing things. The kind of things that we all know people can do. Two basic views. From a logical point of view, I suppose you might have a third possible view. If we've got the monist who says there's bodies but there's no souls, you could imagine somebody who says there are souls but there are no bodies. This would roughly be a view according to which there are minds, but there aren't really physical objects. Physical objects are a kind of illusion, perhaps, that we fall into. Or thinking about them in materialistic terms might be greatly confused or mistaken. This view is sometimes known in philosophy as idealism: all that exists are minds and their ideas. Talk of physical objects is just a way of talking about the ideas the mind has, or something like that. Idealism is a position that's got a very long history in philosophy and for many classes would be worth taking a fair bit of time to consider more carefully. But for our purposes, I think it's not a contender. So I'm just going to put it aside. The positions that I'm going to--and there are other possibilities as well. There are views where mind and body are just two different ways of looking at the same underlying reality where the underlying reality is neither physical nor mental. That view's also worth taking seriously in a metaphysics class, but for our purposes, I mention it and put it aside. The two views we are going to focus on are, on the one hand, the dualist view--people have souls as well as bodies--and the physicalist view--all we have, all we are, are bodies. Let me say something more then about the dualist position.
According to the dualist, the mind is this immaterial substance and we could call it by different names. No harm would be done if we call it a mind, though the reason I will typically talk about a soul is to try to flag the crucial point of the dualist view. The mind is based in, or just is something nonphysical, something nonmaterial…The soul can direct and give orders to the body, on the one hand. On the other hand, the body generates input that eventually gets sensed or felt by the soul. You take a pin and you stick it through the flesh of my body and I feel pain in my soul, in my mind. So, two-way interaction. As always with philosophy, there's more complicated versions of dualism where maybe the interaction doesn't work both ways, but let's just limit ourselves to good, old-fashioned, two-way interactionist dualism. So my mind controls my body. My body can affect my mind in various ways. But for all that, they're separate things. Still there's this very tight connection. We sometimes put it: the soul is in the body, though talking about spatial locations here may be somewhat metaphorically intended. It's not as though we think that if you start opening up the body you'd finally find the particular spot. Here's the place where the soul is located. Though it does seem, from this dualist perspective, as though souls are located: I'm sort of viewing the world from here. Just like each of you is viewing the world from a particular location. So maybe your soul is located, more or less, in the vicinity of your body. Crucial point, of course, the attraction of the dualist view, from our point of view, is this: if there's a soul as well as the body, and the soul is something immaterial, then think about what happens when the body dies, when we have B_1 through B_n and the death of the body occurs. At the end of B_n, the body stops repairing itself. Decay sets in. We all know the sad story. The worms crawl in, the worms crawl out.
At the end of the day--well, maybe it takes longer than a day--the body has decomposed. Yes, all that bespeaks the end of the body. But if the soul is something immaterial, then that could continue to exist, even after the destruction of the body. That's the attraction, at least one of the attractions, of the dualist view. The belief in the soul gives you something to continue to exist after the end of your body. So what's death? Well, if normally there's this super tight connection between my soul and my body, death might be the severing of that connection. So the body breaks and no longer is able to give input up to the soul. The soul is no longer able to control the body and make it move around. But for all that, the soul might continue to exist. And so at least the possibility that I'll survive my death is one worth taking very, very seriously if we are dualists. A couple of things to point out about this view. One is I've been talking as though a person is a combination, kind of a soul and body sandwich. So a person has two basic building blocks. The bodily part and the soul part. It's natural to talk that way, but if we want belief in the soul to help us hold out the possibility at least that there might be life after death, then I think we need to actually say that strictly speaking, it's not that a person is a soul plus a body. Strictly speaking, I think we need to say the person just is the soul. After all, if the person is the combination, if the person is the pair, soul plus the body, destroy the body, you've destroyed the pair. If the person is the pair and the pair no longer exists, the person no longer exists. So if we want belief in a soul to help us leave open the door to the possibility that I survive the destruction of my body, it had better not be that the body is an essential part of me. It's simpler, more straightforward to say instead, "What I am strictly speaking is a soul." As long as the soul exists, I exist.
Of course, my soul, me, I, have a very tight connection to a particular body. But still, you could, in principle, destroy the body without destroying me. Look, I have a particularly close connection to the house I live in. But for all that, you can destroy my house without destroying me. So that's I think the position that we ought to ascribe to the dualist. The person is, strictly speaking, the soul. The soul has a very intimate connection with the body, but the person is not the soul and the body. The person is just the soul. So even if that intimate connection gets destroyed, the person, the soul, could continue to exist. The second point to clear up is that there's really three different issues that might interest us. One, metaphysically, are bodies and souls distinct? Is the mind to be understood in terms of this immaterial object, the soul? So are there two kinds of things? That's the first question. Are souls and bodies distinct? Second question, though, is: Does the soul, even if it exists, survive the destruction of the body? It could be something separate without surviving. That's why I've tried to say if there are souls, at least that opens the door to the possibility that we will survive our death. But, it doesn't guarantee it, because absent further argumentation, there's no guarantee that the soul survives the death of the body. Even if it's separate, it could be that it gets killed at the very same time or destroyed at the very same time that the body's being destroyed. Maybe when these physical processes, B_1 through B_n, take place, they set into motion--remember, after all we're interactionist dualists. There's this very tight causal connection between the body and the soul, and between the soul and the body. Just like when you prick my body, that bodily process sets up certain things taking place in my soul. Maybe when B_1 through B_n take place, they set up some other processes in my soul. Call them S_1 through S_n.
And maybe S_1 through S_n results in the destruction of my soul. So simultaneously with my body dying, my soul dies. Okay, this one's going to be a little bit trickier to draw. The first part, S_1..., that's easy. S_n. The question is: How do I draw the soul? I don't really know . So the mere fact that we decide, if we do ultimately decide that there is a soul, something nonphysical, separate and distinct from the body, doesn't guarantee that we survive our physical death. That's going to be a separate question we'll have to turn to. The first question's going to be: Are there any souls? Next question is going to have to be: If there are, do we have any good reason to think that they survive the death of the body? Third question that might interest us, that does interest us, is this: If it survives, how long does it survive? Does the soul continue to exist after the death of the body? Does it continue to exist forever? Are we immortal? Most of us would like that to be true. We want there to be souls so that we can be immortal. And so the question's got to be not only, is the soul distinct? Does it survive the death of my body? But does it continue to exist forever? Those questions--hang on one second--are ones that especially interest Plato. So in about a week or so we'll start reading Plato's Phaedo. The purpose of that dialogue, of that philosophical work, is to argue for the immortality of the soul. That's a question we'll be turning to. Yeah? Student: [inaudible] Professor Shelly Kagan: Great. So the question is this. If the very idea of soul that we're working with here under the dualist picture is the soul as an immaterial substance, it's not made of ordinary atomic matter. If the soul is immaterial, doesn't it follow automatically, trivially, that the soul can't be destroyed by a material process? After all, there was death of the body, B_1 through B_n. That's a material process, a physical process. 
Doesn't it follow that a soul, an immaterial entity, can't be destroyed by a material, physical process? That's a great question. What I want to say is, the short answer for now is, I don't think it follows automatically. It doesn't follow trivially. It may follow. Plato's actually going to give us some arguments for pretty much that same claim. Plato's going to argue once we understand the sort of metaphysical nature of the soul, we'll see why it couldn't be destroyed. That's going to take some fancy arguments. The reason I think it doesn't follow trivially is because, remember, I said we're dealing with interactionist dualism. We've already admitted that bodies are able to affect the soul, right? The body is having all sorts of light bounce off my eyes of various wavelengths. And because of that my soul is having various visual sensations about the number of people in front of me, colors, and so forth and so on. I gave the example of pricking my body. That's a physical process that causes some sorts of changes in the mental processes occurring in my soul. Once we've admitted that on this kind of dualist picture the material body can influence what happens in the immaterial soul, then it doesn't seem that we have any grounds for shutting the door to the possibility that the right physical process, B_1 through B_n, might set up this horrible mental, soul process, S_1 through S_n, resulting in the destruction of the soul. It's a possibility. It's going to take more arguments to rule it out. Yeah? Student: [inaudible] Professor Shelly Kagan: Yeah, another great question. The question was: I said it seems plausible to say my soul is located, more or less, here because I seem to view the world from here. But maybe that's not right. Maybe we shouldn't talk about the location of the soul at all. After all, if the soul is an immaterial object, can immaterial objects have locations? I don't know. The short answer is I don't know. 
I know very little about how immaterial objects are supposed to work. So although I'm trying to sketch the dualist position, as I explained on Tuesday, I don't myself believe in souls. I don't actually think that the dualist view is correct. You might say, I'll leave that problem--are souls spatially located or not--to be worked out by those who believe in it. For our purposes, I think it doesn't really matter. If you want to say souls have a location, where are they located? They're located more or less where my body is. At least, as long as my body's working. Maybe at death the soul gets liberated from the body and is able to wander more freely. Sometimes people talk about, in fact we'll be reading about this, out-of-body experiences. And so maybe during those unusual times the soul wanders from the body and comes back to it. Or, alternatively, maybe the soul doesn't have any location at all. Maybe that's just an illusion created by the fact that I'm getting this visual input from my body. My body certainly has a location. Maybe the right way--imagine somebody who was in a room with remote control television setup and so forth and so on. And he's seeing what's happening in Chicago, even though he's sitting in a room in New Haven. Well, you could understand why he might fall into the trap of thinking of himself as located in Chicago with all the visual inputs coming from Chicago. So maybe that's how it works with the soul. We get lulled into thinking that we are where our bodies are. But that's really a metaphysical illusion. I don't really know. For our purposes, I think it's not crucial. Though it's a great question, but I'm not going to try to pursue it any further. All right. So one question: Is there a soul? Second question: Does it survive the death of our body? Third question: If it does, does it live forever? Does it continue to exist forever? Is the soul immortal? 
We will initially think about the first question: Do we have any good reason to believe in souls at all? And only after a while will we turn to the second and third questions: Does it survive and, more particularly, is it immortal? That's the first basic view about the nature of a person. A person has a soul, something immaterial, not a body. I take it that the view is a familiar one. Many of you probably believe in it. Those of you who don't believe in it have probably, at least, been tempted to believe in it. I'm sure you all do know people who believe in it. It's a very familiar picture. But, of course, the question we're going to have to ask ourselves is: Is it right? Are there reasons to believe it's correct? Turn now to the second basic view, the physicalist view, according to which a person is just a body. This is a materialist view. People are just material objects, the sorts of things biologists poke and prod and study. It's important--I think this is the crucial point--that when we say a person is just a body, we don't understand that to mean--the physicalist doesn't mean that as--a person is just any old body. It's not as though there aren't important differences between different physical objects. Some physical objects can do things of a far more interesting sort than other physical objects. Here's a piece of chalk. It's a physical object. It's just a body. What can it do? Well, not a whole lot. I can write on the board with it. I can break it in two. You let go of it, it drops down. Not a very interesting physical body. Here's a cell phone. It's just a body. It's not the most interesting physical object in the world, but it's a whole lot more interesting than a piece of chalk. It can do all sorts of things a piece of chalk can't do. If the physicalist is right, then here's another physical object for you--me, Shelly Kagan. I'm a pretty impressive physical object.
Now arrogant as I may be, I don't mean to suggest I'm any more impressive than you guys are. Each one of us, according to the physicalist, is just a body that can do some amazing things. We are bodies that can think. We are bodies that can plan. We are bodies that can reason. We are bodies that can feel. We are bodies that can be afraid and be creative and have dreams and aspirations. We are bodies that can communicate with each other. We are bodies that are--well, here's a word for it: We're bodies that are people. But on the physicalist view, a person is just a body. And that's where we'll take it up next time.
Philosophy of Death, Lecture 17: The Badness of Death, Part II: The Deprivation Account

Professor Shelly Kagan: Last time we made the turn from metaphysics to value theory. We started asking about what it is about death that makes it bad. The first aspect of the badness of death that we talked about was the fact that when somebody dies, that's hard on the rest of us. We're left behind having to cope with the loss of this person that we love. Nonetheless, it seems likely that if we want to get clear about the central badness of death, it can't be a matter of the loss for those who remain behind, but rather the loss, the badness of death, for the person who dies. That, at any rate, is what I want to focus on from here on out. What exactly is it about my death, or the fact that I'm going to die, that makes that bad for me? Now, I want to get clear about precisely what it is we want to focus in on here. Now, one thing that could be bad, obviously, is the process of dying could be a painful one. It might be, for example, that I get ripped to pieces by Bengali tigers. And if so, then the actual process of dying would be horrible. It would be painful. And clearly it makes sense to talk about the process of dying as something that could potentially be bad for me. Although similarly, I might die in my sleep, in which case the process of dying would not be bad for me. At any rate, I take it that most of us, although we might have some passing concern about the possibility that our process of dying might be a painful one, that's not, again, the central thing we're concerned about when we face the fact that we're going to die. It's also true, of course, that many of us find--here, right now, while we're not actually dying--the prospect of dying to be unpleasant. So, one of the things that's bad about my death for me is that right now I've got some unhappy thoughts as I anticipate the fact that I'm going to die.
But again, that can't be the central thing that's bad about death, because the prospect of my death--it makes sense for that to be a painful one or an unpleasant one, only given the further claim that death itself is bad for me. Having fear or anxiety or concern or regret or anguish or whatever it is that maybe I have now about the fact that I'm going to die, piggybacks on the logically prior thought that death itself is bad for me. If it didn't piggyback in that way, it wouldn't make any sense to have fear or anxiety or dread or anguish or whatever it is that I may have now. I mean, suppose I said to you, "Tomorrow something's going to happen to you and that thing is going to be simply fantastic, absolutely incredible, absolutely wonderful." And you said, "Well, I believe you and I have to tell you, I'm just filled with dread and foreboding in thinking about it." That wouldn't make any sense at all. It makes sense to be filled with dread or foreboding or what have you only if the thing you're looking forward to, anticipating, is itself bad. Maybe, for example, it makes sense to dread going to the dentist, if you believe that being at the dentist is a painful, unpleasant experience. But if being at the dentist isn't itself unpleasant, it doesn't make sense to dread it in anticipation. So again, if we're thinking about the central badness of death, it seems to me that we've got to focus on my being dead. What is it about my being dead that's bad for me? Now, if we pose that question, it seems as though the answer should be simple and straightforward. When I'm dead, I won't exist. Now previously, in the first part of the class, we spent some time saying that, look, on certain views, there'll be a period of time in which you might be dead, but your body might still be alive. Or you might be dead, but even though your body still exists, it's not alive, but you exist as a corpse. Put all that aside. 
Go to the period beyond any of that murky stuff in the short-term and just, for simplicity, let's suppose with the physicalists that once I die, I cease to exist. All right. So, don't we have the answer to what's bad about death right there? When I'm dead, I won't exist. Isn't that the straightforward explanation about why death is bad? Now, what I want to say, in effect, is this. I do think the fact that I won't exist does provide the key to getting clear about how and why death is bad. But I don't think it's quite straightforward. I think, as we'll see, it actually takes some work to spell out exactly how death, how nonexistence, could be bad for me. And even having done that, there'll be some puzzles that remain that we'll be turning to in a little while. So, the basic idea seems to be straightforward enough. When I'm dead, I won't exist. Isn't it clear that nonexistence is bad for me? Well, immediately you get an objection. You say, how could nonexistence be bad for me? After all, the whole point of nonexistence is you don't exist. How could anything be bad for you when you don't exist? Isn't there a kind of logical requirement that for something to be bad for you, you've got to be around to receive that bad thing? A headache, for example, can be bad for you. But of course, you exist during the headache. Headaches couldn't be bad for people who don't exist. They can't experience or have or receive headaches. How could anything be bad for you when you don't exist? And in particular, then, how could nonexistence be bad for you when you don't exist? So it's not, as I say, altogether straightforward to see how the answer "Death is bad for me, because when I'm dead I don't exist," how that answers the problem, as opposed to simply focusing our attention on the problem. How can nonexistence be bad for me? The answer to this objection, I think, is to be found in drawing a distinction between two different ways in which something can be bad for me. 
On the one hand, something can be bad for me, we might say, in an absolute, robust, intrinsic sense. Take a headache, again, or some other kind of pain--stubbing your toe or getting stabbed or whatever it is, being tortured. Pain is intrinsically bad. It's bad in its own right. It's something we want to avoid for its own sake. Normally, things that are bad for you are bad intrinsically. They're bad by virtue of their very nature. There's something about the way they are that makes you not want them; they're bad in their own right. But there's another way of something being bad for you that it's easy to overlook. Something can be bad comparatively. Something could be bad because of what you're not getting while you get this bad thing. It could be what the economists call bad by virtue of "the opportunity costs." It's not that it's intrinsically bad; it's bad because while you're doing this, you're not getting something better. How could that be? Let's have a simple example. Suppose that I stay home and watch something on TV--Deal or No Deal. I watch this on TV and I have a good enough time. How could that be bad for me? Well, in terms of the first notion of bad, something being intrinsically bad, it's not bad. It's a pleasant enough way to spend a half an hour, or however long the show is on. On the other hand, suppose what I could be doing instead of watching a half an hour of television is being at a really great party. Then we might say, the fact that I'm stuck home watching television is bad for me in this comparative sense. It's not that it's, in itself, an unpleasant way to spend some time; it's just that there's a better way to spend time that I could be doing, in principle at least. If only I'd gone. If only I'd been invited. If only I remembered, what have you. And because I'm foregoing that better good, there's something bad, comparatively speaking, about the fact that I'm stuck at home watching TV. There's a lack of the better good.
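To make the opportunity-cost idea concrete, here is a toy sketch in Python. The function name and the numeric values are my own illustrative choices, not anything from the lecture; the point is only that comparative badness is a difference between the value you got and the value of the best alternative you forwent:

```python
def comparative_badness(actual_value, best_alternative_value):
    """How much worse off you are than under the best available alternative.

    This is badness in the comparative (opportunity-cost) sense:
    the actual outcome can be intrinsically fine -- even pleasant --
    and still count as bad because something better was forgone.
    """
    return best_alternative_value - actual_value

# The TV example: an evening of television is pleasant enough (say, value 5),
# but a really great party (say, value 20) was available instead.
tv_evening = 5
great_party = 20
print(comparative_badness(tv_evening, great_party))  # 15: comparatively bad
print(comparative_badness(tv_evening, tv_evening))   # 0: no better alternative, no comparative bad
```

A zero or negative result means nothing better was on offer, so there is nothing comparatively bad about what you got, however modest its intrinsic value.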
A lack is not intrinsically bad, but it's still a kind of bad in this second sense. To be lacking a good is, itself, bad for me. Similarly, suppose I hold out two envelopes and I say, "Pick one." You pick the first one, and you open it up and you say, "Hey look, ten bucks! Isn't that good for me?" Well, of course, ten bucks is intrinsically good. Anyway, well, it's not intrinsically good, it's only good as a means to buy something. But it's sort of good. It's worth having, not in its own right, but because of what it can get you. But if unbeknownst to you, the other envelope had $1,000 in it, then we can say, "Look, it's bad for you that you picked the first envelope." Bad in what sense? Because you would have been better off, had you picked the second envelope. You would have been having more good, or a greater amount of good. Well, nonexistence can't be bad for me in our first sense. It can't be that nonexistence is intrinsically bad, worth avoiding for its own sake. That would only make sense if nonexistence was somehow, for example, painful. But when you don't exist, you have no painful experiences. There's nothing about nonexistence in and of itself that makes us want to avoid it. Nonexistence is only bad for me in this comparative sense, because of the lack. When I don't exist, I'm lacking stuff. What am I lacking? Well, of course, what I'm lacking is life and more particularly still, the good things that life can give me. So, nonexistence is bad by virtue of the opportunity costs that are involved. Famously, W.C. Fields on his tombstone says, "Personally, I'd rather be in Philadelphia." What's bad about being dead is you don't get to experience and enjoy any longer the various good things that life would offer us. So nonexistence does point to the key aspect about death. Why is death bad? Because when I'm dead I don't exist. But if we ask, why is and how can it be the case that nonexistence is bad?
the answer is, because of the lack of the good things in life. Because when I don't exist, I am not getting the things that I could have otherwise gotten, if only I were still alive. Death is bad because it deprives me of the good things in life. This account is nowadays known as the deprivation account of the evil or badness of death, for obvious reasons, right? The key thought is, the central bad about death, about nonexistence, is that it deprives you of the goods of life you might otherwise be getting. That's the deprivation account. And it seems to me that the deprivation account basically has it right. Eventually, I'll go on to argue that there are other aspects of death that may also contribute to its badness, aspects above and beyond the one that gets focused on by the deprivation account. But still, it seems to me the deprivation account points us correctly to the central thing about death that's bad. What's most importantly bad about the fact that I'll be dead is the fact that when I'm dead, I won't be getting the good things in life. I'll be deprived of them. That's the badness of death according to the deprivation account. Now, if we accept the deprivation account, if we try to accept the deprivation account, we face some further philosophical puzzles. Puzzles that many people have thought are sufficiently overwhelming that we, despite the initial plausibility of the deprivation account, have to give it up. First objection is this. Look, if something is true--a quite general point, it seems, about metaphysics--if something is true, there's got to be a time when it's true. If I make some claim about a fact, there's got to be a time when that fact is true. Here's a fact. Shelly's lecturing to you now about the badness of death. When is that fact true? When was that fact true? Well, right now. Here's another fact. Shelly once lectured to you about the nature of personal identity. When was that fact true? 
Well, we can point to a period of perhaps a week or two last month when I was lecturing to you about personal identity. Things that are facts can be dated. All right. That seems right. But if it is right, then immediately we've got a puzzle. How could death be bad for me? If death was bad for me, that would be a fact. If my death is bad for me, that would be a fact and we'd ask, well, when is that fact true? We might say, well, it's not true now. Death isn't bad for me now. I'm not dead now. Maybe death is bad for me when I'm dead? But that seems very hard to believe. I mean, when I'm dead, I don't exist, right? How could anything be bad for me then? Surely you've got to exist. So, there's a puzzle about dating the badness of death. Now, it may be that this puzzle about time and the date of the badness of death is what Epicurus had in mind. There's a passage that I'm going to read to you in a moment from Epicurus. This passage has puzzled people, it has puzzled philosophers ever since. Epicurus seems to be putting his finger on something puzzling about death, though it's difficult to pin down exactly what it is that's bugging him. So we're going to try an interpretation or two. But first, here's the passage from Epicurus. "So death, the most terrifying of ills, is nothing to us, since so long as we exist, death is not with us; but when death comes, then we do not exist. It does not then concern either the living or the dead, since for the former it is not, and the latter are no more." You see, it's not altogether clear what Epicurus is bothered by there, but one possible interpretation is this puzzle about the timing of the badness of death. Death can't be bad for me now, because I'm alive. Death can't be bad for me when I'm dead. I am no more; then how can things be bad for me then? 
But if death has no time at which it's bad for me, and if anything that's true, any fact has to have a time when it's true, then the purported fact that death is bad for me can't really be a fact. All right. How could we respond to this objection? Well, one way of course is to accept the objection and say, "You're right. Death isn't really bad for me." And some philosophers have indeed accepted that very conclusion, maybe Epicurus. Most of us want to say, "No, no. Death is bad for me." So we need a better answer to the, "Oh yeah? When is it bad for you?" objection. Two possible responses. One possible response would be to grab the bull by the horns and say, "Death is bad for me. Facts do have to be dated. Let me tell you when it's bad for me." The other possible response is to grab, as it were, the other horn and say, "You know, death is bad for me and I agree that I can't date it, but you're wrong to assume that all facts have to be datable. There are some things that are true that we can't put a date on." Let's start with the second. Could there be some things that are true that we can't put a date on? Well, here's one I think, maybe. Suppose that on Monday I shoot John. I wound him with the bullet that comes out of my gun. But it's not a wound directly into his heart. He simply starts bleeding. And he bleeds slowly. So he doesn't die on Monday. He's wounded and he's dying, but he doesn't die on Monday. On Tuesday, let's suppose that I have a heart attack and I die. John's still around--bleeding, but still around. On Wednesday, the loss of blood finally overtakes him and John dies. All right? So, I shoot him on Monday, I die on Tuesday, John dies on Wednesday. Now, I killed John. I take it we're all in agreement about that. If I hadn't shot him, he wouldn't be dead. I killed him. When did I kill him? Did I kill him on Monday, the day I shot him? That doesn't seem right. He's not dead on Monday. How could I kill him on Monday? Oh, he died on Wednesday. 
Did I kill him on Wednesday? Well, how could that be? I don't even exist on Wednesday. I died myself on Tuesday. How can I kill him after I'm dead? So I didn't kill him on Monday, and I didn't kill him on Wednesday, when he dies. When did I kill him? Well, maybe the answer is there's no particular time at all when I killed him. But for all that, it's true that I killed him. What makes it true that I killed him? What makes it true that I killed him is that on Monday I shot him and on Wednesday, he died from the wound. That's what makes it true. But when did I kill him? Maybe we can't date that. Suppose we can't. If we can't, then there are facts that you can't date, like the fact that I killed John. If there are facts that you can't date, maybe here's another one. My death is bad for me. When is that true? Can't date it, but for all that, maybe it's true. So maybe we shouldn't accept the assumption of the argument that all facts can be dated. Of course, the thought that all facts can be dated is a very powerful one, and no doubt, many of you are going to go home and start trying to come up with an adequate answer to the question, when exactly did Shelly kill John? And come up with an answer, maybe, that you can even accept. At any rate, suppose we do accept the thought that all facts can be dated. In which case, if we're going to want to insist that my death is bad for me, we'd better be able to come up with a date. Well, maybe we can. When would it be plausible to claim my death is bad for me? Well, not now. My death can't be bad for me now. I'm not dead. But it's not 100% clear that the other alternative isn't acceptable. Why not say, "My death is bad for me when I'm dead"? After all, when is a headache bad for me? When the headache is occurring. Now, according to the deprivation account, the badness of death consists in the fact that when you're dead, you are deprived of the goods of life.
So when is death bad for you? During the time perhaps you're being deprived of the goods of life. Well, when are you deprived of the goods of life? When you're dead. When does the deprivation actually occur? When you're dead. So perhaps we should just say, "Well, you were right, Epicurus," if this was Epicurus' argument. "You were right, Epicurus. All facts have to be dated, but we can date the badness of death. My death is bad for me during the time I'm dead. Because during that time, I'm deprived of, I'm not getting, the good things in life that I would be getting if only I were still alive." Well, that's a possible response to the objection. But of course, it just immediately raises a further objection. How could it be that death is bad for me then? How could it be that death is bad for me when I don't exist? Surely, I have to exist in order for something to be bad for me. Or, for that matter, for something to be good for me. Don't you need to exist in order for something to be good or bad for you? Well, this points our way to a different possible interpretation of Epicurus' argument. The argument would be (A) something could be bad, or for that matter, good for you only if you exist; (B) when you're dead you don't exist; so (C) death can't be bad for you. Put that up on the board. (A) Something can be bad for you only if you exist. (B) When you're dead you don't exist. So, conclusion, (C) death can't be bad for you. Maybe that's the argument that Epicurus had in mind. Let's hear Epicurus'--the quote from Epicurus again. "So death, the most terrifying of ills, is nothing to us, since so long as we exist, death is not with us; but when death comes, then we do not exist. It does not then concern either the living or the dead, since for the former it is not, and the latter are no more." Again, the passage from Epicurus isn't altogether clear, but maybe he's got in mind something like this argument. 
Maybe Epicurus thinks look, (A) something can be bad for you only if you exist; (B) when you're dead you don't exist; so (C) death can't be bad for you. Well, what should we say? It's pretty clear that (B) is true. When you're dead you don't exist. And so the conclusion, (C) death can't be bad for you, looks like it's going to follow, once we accept (A). Call (A) the existence requirement. Something can be bad or, for that matter, good for you, only if you exist. That's the existence requirement for bads and goods. If we accept the existence requirement, it looks as though we have to accept the conclusion, death can't be bad for you. What should we say? Maybe what we should say is, reject the existence requirement. In the ordinary case, pains, being blind, being crippled, what have you, losing your job--in the ordinary case, things are bad for you when you exist. In the ordinary case, in order to receive bads, you've got to exist. But perhaps that's only the ordinary case; it's not all cases. Perhaps we should say, look, for certain kinds of bads you don't need to even exist in order for those things to be bad for you. What kind of bads could be like that? Well, of course, deprivation bads would be exactly like that. To lack something, you don't need to exist. Indeed, the very fact that you don't exist might provide the very explanation as to why you've got the deprivation, why you've got the lack. Not all lacks might be like that, right? Remember the television case. You existed while you were being deprived of the great party. You existed while you were getting the mere $10 instead of the $1,000. So, sometimes deprivations coincide with existence. But the crucial point about deprivations is you don't even need to so much as exist in order to be deprived of something. Nonexistence guarantees that you're deprived of something. So perhaps we should just reject the existence requirement. 
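Before weighing the premises, it may help to see that the (A)-(B)-(C) argument is logically valid, so the only way to resist the conclusion is to deny a premise; that is exactly why the discussion targets the existence requirement. Here is a minimal formal sketch, a propositional rendering of my own (the proposition names are mine, not Epicurus' or the lecture's), written in Lean:

```lean
-- Hedged sketch: a propositional rendering of the (A)-(B)-(C) argument.
-- Premise A (existence requirement): if death is bad for me,
--   then I exist when I'm dead.
-- Premise B: I don't exist when I'm dead.
-- Conclusion C: death is not bad for me.
theorem epicurus
    (DeathBadForMe IExistWhenDead : Prop)
    (hA : DeathBadForMe → IExistWhenDead)
    (hB : ¬ IExistWhenDead) :
    ¬ DeathBadForMe :=
  fun h => hB (hA h)
```

The proof is just modus tollens, which underscores the point in the text: since the inference is airtight, the deprivation theorist has to reject premise (A) itself.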
Perhaps we should say, when we're talking about lacks, when we're talking about deprivations, (A) is wrong. We should reject the existence requirement. Something can be bad for you even if you don't exist. The existence requirement is false. Well, that would be a possible way to respond to this second possible interpretation of Epicurus' argument. It would be a way to retain the thought that I take it we want to have, that most of us share, at least, that death is bad. We'd be able to retain that thought by rejecting the existence requirement. Well, easy to say that, but there are some implications of rejecting the existence requirement that may be rather hard to swallow. Think about exactly what it's saying. It's saying something, for example death, nonexistence, can be bad for somebody even though they don't exist. That's why my death can be bad for me even though I won't exist. But if death can be bad for somebody even though they don't exist, then death could be bad for somebody, that is to say, nonexistence could be bad for somebody, who never exists. Take somebody who is a possible person, but never actually gets born. It's sort of hard to think about somebody like that. So let's try to get at least a little bit more concrete. I need two volunteers. I need a male volunteer from the audience. Good. Okay, you'll be the male volunteer. And I need a female volunteer from the audience. Come on, it won't hurt. I need a female volunteer. Okay. What I'd like you to do after class is go have sex and have a baby. Okay. Now, let me just suppose that this isn't actually going to happen. Sorry. Or sorry, I don't know. Let's consider, though, the possibility never to be actualized, the possibility that they would have sex and have a baby. His sperm joined with her egg, form a fertilized egg. The fertilized egg develops into a fetus. The fetus is eventually born. It's the fetus that we got by mixing egg 37 with sperm 4,000,309. There's a person that could have been born. 
But let's suppose, never does get born. That particular person who could have been born, let's call Larry. Okay. Larry is a possible person. It could happen, but won't happen. It could exist, but won't exist. Now, how many of us feel sorry for Larry? Probably nobody. After all, Larry never even exists. How can we feel sorry for Larry? Now, that made perfect sense when we accepted the existence requirement, (A), something can be bad for you only if you exist. Since Larry never exists, nothing can be bad for Larry. But once we give up on the existence requirement, once we say something can be bad for you even if you never exist, then we no longer have any grounds for withholding our sympathy from Larry. We can say, "Oh my gosh! Think of all the goods in life that Larry would have had, if only he'd been born. But he never is born, so he's deprived of all those goods." And if death is bad for me, by virtue of being deprived of the goods of life, then nonexistence is bad for Larry, by virtue of his being deprived of all the goods of life. I've got it bad. I'm going to die. Larry's got it worse. We should really feel much sorrier for Larry. But I bet none of you feels sorry for Larry, this never-to-be-born-at-all person. Now, it's important in thinking about this, that we not slip back into some version of the soul view, especially some version of the soul view where the souls are prior existents. You might imagine--there's a scene in Homer, I think, where some sort of sacrifice is being made and all the dead souls go hover around, longing to be alive again, to savor the food and taste and smells of life, right? If you've got this picture of the nonexistent, merely potentially possible but never-to-be born individuals as somehow really already existing in a kind of ghost-like state, wishing they were born, maybe you should feel sorry for them. But that's not what the story is at all on the physicalist picture that I'm assuming. 
Nonexistent people don't have a kind of spooky, wish-I-were-alive ghost-like existence. They just don't exist, full stop. So once we keep that in mind about Larry, it's very hard to feel sorry for him. Of course, look, since I've been going on about how he's deprived of all the good things in life, maybe some of you are feeling sorry for Larry. So it's worth getting clear about just what it would mean to take seriously the thought that it's bad for merely potential people never to be born. How many merely potential people are there? I want you to get a sense of just how many there are. Not just Larry, the unborn person that would exist if we mixed whatever it was, you know, egg 37 and sperm 4,000,029, whatever the number was. Not just Larry, who's a potential person who never gets born, that would have to be an object of our sympathy, there's a lot of merely potential, never-to-be-born people. How many? A lot. How many? Well, I once tried to calculate. Well, as you'll see, the calculation is utterly off the back of the envelope, sort of rough and completely inadequate in ways that I'll point out. But at least it'll give you a sense of just how many potential people there are. Let's start modestly and ask: How many possible people could we, the current generation, produce? Now as I say, I made this calculation some years ago. It doesn't really matter how inaccurate it's going to be. As we'll see, it's very rough, but it makes the point. How many people are there? How many possible people, rather, could there be? Well, suppose there were 5 billion people. Roughly half of them are men, half of them are women. What we want to know then is, how many possible people could the 2.5 billion men make altogether with the 2.5 billion women? The crucial point in thinking about this is to realize that every time you combine a different egg with a different sperm, you end up with a different person, right? 
If you combine an egg with a different sperm, you get a different genetic code that develops into a different person. You combine that sperm with a different egg, you get a different person. You know, if my parents had had sex five minutes earlier or five minutes later, presumably some other sperm would have joined with the egg. That would have been not me being born, but some sibling being born instead of me. Change the egg, change the sperm, you get a different person. So what we really want to know is, how many sperm-egg combinations are there with roughly 5 billion people in the world? Well, let's see. There's 2.5 billion women. How many eggs can a woman have? Well, fertile periods, round numbers, it's not really going to matter, precision, roughly 30 years, roughly 12 eggs a year. So that's how many eggs. Actually, I discovered some time after having done this calculation that the number of possible eggs is far greater. A woman actually ovulates and gives off this many eggs roughly during her fertile period. But there's many, many other cells, I gather, that could have developed into eggs. So that's a much, much larger number of potential eggs. But this will do. 30 years, 12 eggs a year. How many men? Roughly 2.5 billion men. Each man has a much longer period in which he's able to produce sperm. Let's just be round numbers here, 50 years. How many times a day can the man have sex? Well, certainly more than once, but let's be modest here and just say once a day. So that's 365 times a day--a year. 365 days a year. 365 days, I guess that should be. I wrote it too big. I don't have space left for the last number. Each time the man ejaculates, he gives off a lot of sperm. How much sperm? A lot. As it happens, I looked this up once. Round numbers, 40 million sperm each time the man ejaculates. So this last number has got to be times 40 million sperm.
Okay, so we took all the men that exist now and all the women that exist now and ask: How many merely possible people? You know, most of these people are never going to be born, of course. But we're talking about possible people. How many possible people are there? There's 2.5 billion times 30 times 12 times 2.5 billion times 50 times 365 times 40 million. That equals--I'm going to round here. That equals approximately 1.5 million billion billion billion people. That's 1.5 x 10^(33). That's how many possible people we could have, roughly speaking, in the next generation, of which obviously a miniscule fraction are going to be born. There's--If you're going to feel sorry for Larry, you've got to feel sorry for every merely possible person. Every person who could have been born that never gets born. And there's 1.5 million billion billion billion such people, such possible people. And of course, the truth of the matter is, we barely scratched the surface here. Because now think of all those people and think about all the possible children they could have. We got this number starting with a mere 5 billion people. Imagine the number we would get if we then calculated how many possible grandchildren we could have. I don't mean that we could actually have all of those people at the same time, but for each one there is a possible person that could have existed. You quickly end up with more possible people than there are particles in the known universe. And that was just two generations, right? Three generations, you're going to have more. Four generations, you're going to have more. If we think about the number of possible people, people who could have existed but will never exist, the number just boggles the mind. 
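The back-of-the-envelope multiplication is easy to check. A few lines of Python, using the rough figures quoted in the lecture, reproduce the order of magnitude:

```python
# Rough figures from the lecture for one generation of merely possible people.
women = 2_500_000_000                    # ~half of 5 billion people
eggs_per_woman = 30 * 12                 # ~30 fertile years, ~12 eggs a year
men = 2_500_000_000
sperm_per_man = 50 * 365 * 40_000_000    # 50 years, once a day, ~40M sperm each

# Every distinct egg-sperm pairing would be a distinct possible person.
possible_people = (women * eggs_per_woman) * (men * sperm_per_man)
print(f"{possible_people:.2e}")          # ≈ 1.64e+33
```

The exact product comes out a bit above the rounded 1.5 x 10^(33) quoted in the lecture, but it is the same order of magnitude: roughly a million billion billion billion possible people from a single generation.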
And then, if we say we've gotten rid of the existence requirement and so things can be bad for you even if you never actually exist, then we have to say of each and every single one of those billions upon billions upon billions upon billions upon billions of possible people that it's a tragedy that they never get born, because they're deprived of the goods of life. If we do away with the existence requirement, then the tragedy of the unborn possible people is a moral tragedy that mere--that just staggers the mind. The worst possible moral horrors of human history don't begin to even be in the same ballpark as the moral horror of the loss, the deprivation for all of these unborn possible people. Now, I don't know about you, when I think about it, all I can say is it doesn't strike me as being a moral catastrophe. I don't feel anguish and sorrow and dismay at the loss, at the lack, at the deprivation for the untold billion, billion, billion, billions. But if we give up the existence requirement and explain the badness of my death via the deprivation account, we do have to say this is a moral tragedy, the fact that the billions upon billions are never born. Well, if we're not prepared to say that's a moral tragedy, well, we could avoid that by going back to the existence requirement. But of course, if we do go back to the existence requirement, then we're back with Epicurus' argument. Something can be bad for you only if you exist. When you're dead, you don't exist. So, (C), death can't be bad for you. And now we've really got ourselves in a philosophical pickle, don't we? If I accept the existence requirement, we've got an argument that says death isn't bad for me, which is really rather surprising. I can keep the claim that death is bad for me by giving up the existence requirement. 
But if I give up the existence requirement, I've got to say it's a tragedy that Larry and the untold billions, billions, billions, billions--it's a tragedy that they're deprived of life as well. And that seems unacceptable. What should we do? What should we say? The suggestion is that the key here is to think about the claim that I'm using deprived in two different senses. That when we worry about my death, I'm losing something--namely, life--that I've had. But in the case of Larry and the untold billions, they never had life. And so they're not deprived of it in that same sense. I think it's a very promising suggestion. And indeed, I'm not 100% sure I've got exactly where you want to go with this in mind, but I think there's a way of taking that thought and sort of carving a middle path. The problem effectively was this. If we don't throw in any existence requirement, we have to feel sorry for the unborn billion, billion, billions. That doesn't seem acceptable. If we throw in the existence requirement, (A), something can be bad for you only if you exist, we end up saying death isn't bad for me, because I'm not existing when I'm dead. But maybe there's a more modest way of understanding the existence requirement. Or to put the point in slightly different terms, maybe we can distinguish between two different versions of the existence requirement, a bolder and a more modest version. Let's see. Here's the modest version. Something can be bad for you only if you exist at some time or the other. Bolder claim. Something can be bad for you only if you exist at the same time as that thing. All right. These are two different ways of understanding what the existence requirement requires. The modest version is called modest because it's asking less. It says something can be bad for you only if you exist at some time or the other. The bold existence requirement adds a stronger requirement. 
It says something can be bad for you only if you exist at the very same time as the thing that's supposed to be bad for you. There's got to be a kind of simultaneity. If something's bad for you, you had better exist at the very same time that that bad thing is happening. That's bolder than the modest requirement. The modest requirement doesn't require that you exist at the same time as the bad thing. It only requires that you exist at some time or the other. One more minute, we'll finish up. Suppose we accept the bold claim. For something to be bad for you, you have to exist at the very same time as the bad thing. Then death can't be bad for you, because you don't exist at the time of death. Suppose, however, that we accept the modest requirement. For something to be bad for you, you have to exist at some time or the other. Well, since I do exist at some time or the other--after all, I exist right now--death can be bad for me. Admittedly, I won't exist when I'm dead. But that's okay. The modest existence requirement doesn't require that I exist at the very same time as the bad thing. The bold one did, but the modest one doesn't. So the modest one allows us to say that death is bad for me. But notice, and this is the crucial point, it does not say that nonexistence is bad for Larry, because Larry never exists at all. And so he doesn't even satisfy the modest existence requirement. In short, with no existence requirement, we have to say the nonexistence of the billions and billions is bad. That seems unacceptable. With the bold existence requirement, we have to say death isn't even bad for me. That seems unacceptable. But if, instead, we accept the modest existence requirement, we're able to say, nonexistence is not bad for Larry, but death is bad for me. And so that's the view that it seems to me we should be looking at. Okay.
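The difference between the two requirements can be made vivid with a toy model. This is entirely my own construction, not anything from the lecture: represent each person by the set of times at which they exist, and check each requirement against "me" and against Larry.

```python
# Toy model (my construction): a person is just the set of times they exist.

def satisfies_modest(existence_times):
    """Modest requirement: you exist at some time or other."""
    return len(existence_times) > 0

def satisfies_bold(existence_times, time_of_bad):
    """Bold requirement: you exist at the very time the bad occurs."""
    return time_of_bad in existence_times

me = set(range(1954, 2040))   # alive during these (made-up) years
larry = set()                 # the merely possible, never-born person
time_im_dead = 2100           # a time after my death, when the deprivation occurs

# Bold requirement: death can't be bad for me (I don't exist when I'm dead).
print(satisfies_bold(me, time_im_dead))   # False
# Modest requirement: death can be bad for me (I exist at some time or other)...
print(satisfies_modest(me))               # True
# ...but nonexistence is not bad for Larry (he never exists at all).
print(satisfies_modest(larry))            # False
```

The toy model mirrors the conclusion of the lecture: only the modest requirement lets us say death is bad for me while withholding sympathy from Larry.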
YaleCourses_Philosophy_of_Death | 7_Plato_Part_II_Arguments_for_the_immortality_of_the_soul.txt

Professor Shelly Kagan: We've begun to turn to Plato's dialogue Phaedo, and what I started doing last time was sketching the basic outlines of Plato's metaphysics--not so much to give a full investigation of that--clearly we're not going to do that here--but just to provide enough of the essential outlines of Plato's metaphysical views so that we can understand the arguments that come up later in the Phaedo, basically all of which or many of which presuppose something--certain central aspects about Plato's metaphysical views. The key point behind his metaphysics then was the thought that, in addition to the ordinary empirical physical world that we're all familiar with, we have to posit the existence of a kind of second realm, in which exist the Platonic--as they're nowadays called--the Platonic forms or Platonic ideas. The sort of thing that perhaps we might call or think of as abstract objects or abstract properties. And the reason for positing these things is because we're clearly able to think about these ideas, and yet, we recognize that the ordinary physical world--although things may participate in them to varying degrees--we don't actually come across these objects or entities in the physical world. So that we can talk about things being beautiful to varying degrees, but we never come across beauty itself in the actual empirical world. We are able to talk about the fact that two plus one equals three, but it's not as though we ever come across numbers--number three itself--anywhere in the empirical world. A further point that distinguishes the empirical world from this--this realm of Platonic ideal objects--is that indeed they--there's something perfect about them. They don't change. In contrast, physical objects are constantly changing.
Something might be short at one point and become tall at another point, ugly at one point and become beautiful--like the ugly duckling. It starts out ugly and becomes a beautiful swan. In contrast, justice itself never changes. Beauty itself never changes. We have the thought that these things are eternal, and indeed, beyond change, in contrast to the empirical world. In fact, if you start thinking more about the world from this perspective, the world we live in is crazy. It's almost insanely contradictory. Plato thinks of it as crazy in the way that a dream is. When you're caught up in the dream, you don't notice just how insane it all is. But if you step back and reflect on it, "Well, let's see, I was eating a sandwich and suddenly the sandwich was the Statue of Liberty, except the Statue of Liberty was my mother. And she's flying over the ocean, except she's really a piece of spaghetti." That's how dreams are. And when you're in it, it sort of all makes sense. Right? You're kind of caught up, but you step back and say, "That's just insane." Well, Plato thinks that the empirical world has something of that kind of insanity, something of that kind of contradictoriness, built into it that we don't ordinarily notice. "He's a basketball player, so he's really, really tall, except he's only six feet. So he's really, really short for a basketball player. This is a baby elephant, so it's really, really big--except it's a baby elephant, so it's really, really small." The world is constantly rolling--this is a Platonic expression--rolling between one form and the other. And it's hard to make sense of. In contrast, the mind is able to grasp the Platonic ideas, the Platonic forms; and they're stable, they're reliable, they are--they're law-like and we can grasp them. They don't change; they're eternal. That's, as I say, the Platonic picture. Now, it's not my purpose here to try to argue for or against Platonism with regard to abstract entities.
As I suggested in talking about the example of math last time, it's not a silly view, even if it's not a view that we all take automatically. But in thinking about math, most of us are inclined to be Platonists. We all do believe something makes it true that two plus one equals three, but it's not the fact that empirical objects--We don't do empirical experiments to see whether two plus one equals three. Rather, we think our mind can grasp the truths about numbers. Plato thought everything was like that. Well, I'm not going to argue for and against that view--just wanted to sketch it, so as to understand the arguments that turn on it. So for our purposes, let's suppose Plato was right about that and ask, what follows? Well, Plato thinks what's going to follow is that we have some reason to believe in the immortality of the soul as, again, as we indicated last time, the picture is that the mind--the soul--is able to grasp these eternal Platonic forms, the ideas. Typically, we're distracted from thinking about them by the distractions provided by the body--the desire for food, drink, sex, what have you, sleep. But by distancing itself from the body, the mind, the soul, is able to better concentrate on the forms. And if you're good at that, if you practice while you're alive, separating yourself from the body, then when your body dies, the mind is able to go up to this Platonic heavenly realm and commune with gods and other immortal souls and think about the forms. But if you've not separated yourself from the body while in life, if you're too enmeshed in its concerns, then upon the death of your body your soul will get sucked back in, reincarnated perhaps, in another body. If you're lucky, as another person; if you're not so lucky, as a pig or a donkey or an ant or what have you. So your goal, Plato says, your goal should be, in life, to practice death--to separate yourself from your body. 
And because of this, Socrates, who's facing death, isn't distressed at the prospect, but happy. He's happy that the final separation will take place and he'll be able to go to heaven. The dialogue ends, of course, with the death scene--Socrates has been condemned to death by the Athenians, and it ends with his drinking the hemlock, not distressed but rather sort of joyful. And the dialogue ends with one of the great moving death scenes in western civilization and as Plato says--let's get the quote here exactly right--"Of all those we have known, he was the best and also the wisest and the most upright." Just before the death scene, there's a long myth, which I draw your attention to but I don't want to discuss in any kind of detail. Plato says it's a story; it's a myth. He's trying to indicate that there are things that we can't really know in a scientific way but we can glimpse. And the myth has to do with these sort of pictures I was just describing where we don't actually live on the surface of the Earth or in the light, but rather live in certain hollows in the dark where we're mistaken about the nature of reality. Some of you who are maybe familiar with Plato's later dialogue The Republic may recognize at least what seems to me, what we have here, is a foreshadowing of the myth of the cave, or the allegory of the cave, which Plato describes there as well. Our concern is going to be the arguments that make up the center of the dialogue. Because in the center of the dialogue, before he dies, Socrates is arguing with his friends. Socrates is saying, "Look, I'm not worried. I'm going to live forever." And his disciples and friends are worried whether this is true or not. And so the heart of the dialogue consists of a series of arguments in which Socrates attempts to lay out his reasons for believing in the immortality of the soul. And that's going to be our concern.
What I'm going to do is basically run through my attempt to reconstruct--my attempt to lay out the basic ideas from this series of four or five arguments that Plato gives us. I'm going to criticize them. I don't think they work, though I want to remark before I turn to them that in saying this I'm not necessarily criticizing Plato. As we'll see, some of the later arguments seem to be deliberately aimed at answering objections that we can raise to some of the earlier arguments. And so it might well be that Plato himself recognized that the initial arguments aren't as strong as they need to be. Plato wrote the dialogues as a kind of learning device, as a tool to help the reader get better at doing philosophy. They don't necessarily represent in a systematic fashion Plato's worked out axiomatic views about the nature of philosophy. It could be that Plato's deliberately putting mistakes in earlier arguments so as to encourage you to think for yourself, "Oh, this is--here's a problem with this argument. There's an objection with that argument." Some of these, Plato then may address later on. But whether or not he does address them--we're not doing Plato any honor, we're not doing him any service, if we limit ourselves to simply trying to grasp, here's what Plato thought. We could do the history of ideas and say, "Here's Plato's views. Aren't they interesting? Notice how they differ from Aristotle's views. Aren't they interesting?" and move on like that. But that's not what the philosophers wanted us to do. The great philosophers had arguments that they were putting forward to try to persuade us of the truths of their positions. And the way you show respect for a philosopher is by taking those arguments seriously and asking yourself, do they work or not? 
So whether or not the views that are being put forward in Socrates' mouth are the considered, reflective judgments of Plato or not, for our purposes we can just act as though they were the arguments being put forward by Plato, and we can ask ourselves, "Do these arguments work or don't they?" So I'm going to run through a series of these arguments. I'm going to, as I've mentioned before, be a bit more exegetical than is normally the case for our readings. I'm going to actually pause, periodically look at my notes and make sure I'm remembering how I think Plato understands the arguments. Of course, since the dialogue is indeed a dialogue, we don't always have the arguments laid out with a series of premises and conclusions. And so it's always a matter of interpretation, what's the best reconstruction of the argument he's gesturing towards. How can we turn it into an argument with premises and conclusions? Well, that's what I'm going to try to do for us. Also going to give the arguments names. These are not names that Plato gives, but it will make it easy for us to get a fix, roughly, on the different arguments as we move from one to the next. So the first argument, and the worry that gets the whole thing going, is this. So, we've got this nice Platonic picture where Plato says, "All right. So the mind can grasp the eternal forms, but it has to free itself from the body to do that." And so, the philosopher, who has sort of trained himself to separate his mind from his body, to disregard his bodily cravings and desires--the philosopher will welcome death because at that point he'll truly, finally, make the final break from the body. And the obvious worry that gets raised in the dialogue at this point is this: How do we know that when the death of the body occurs the soul doesn't get destroyed as well? That's the natural worry to have.
Maybe what we need to do is separate ourselves as much as possible from the influence of our body without actually going all the way and breaking the connection. If you think about it like a rubber band, maybe the more we can stretch the rubber band the better; but if you stretch too far and the rubber band snaps, that's not good, that's bad. It could be that we need the body in order to continue thinking. We want to free ourselves from the distractions of the body, but we don't want the body to die, because when the body dies the soul dies as well. Even if we are dualists, as we've noticed before--even if the soul is something different from the body--it could still be the case, logically speaking, that if the body gets destroyed, the soul gets destroyed as well. And so, Socrates' friends ask him, how can we be confident that the soul will survive the death of the body and indeed be immortal? And that's what prompts the series of arguments. Now, the first such argument I dub "the argument from the nature of the forms." And the basic thought is fairly straightforward. The ideas or the forms--justice itself, beauty itself, goodness itself--the forms are not physical objects. Right? We don't ever bump into justice itself. We bump into societies that may be more or less just, or individuals who may be more or less just, but we never bump into justice itself. The number three is not a physical object. Goodness itself is not a physical object. Perfect roundness is not a physical object. Now, roughly speaking, Socrates seems to think it's going to follow straightforwardly from that that the soul must itself be something non-physical. If the forms are not physical objects, then Socrates thinks it follows they can't be grasped by anything physical. We can certainly think about the forms, but if they're non-physical they can't be grasped by something physical like the body. They've got to be grasped by something non-physical--namely, the soul.
But although that's, I think, the sketch of where Socrates wants to go, it doesn't quite give us what we want. On the one hand, even if it were true that the soul must be non-physical in order to grasp the non-physical forms, it wouldn't follow that the soul will survive the death of the body. That's the problem we've been thinking about for the last minute. And there's something puzzling. We might wonder, well, just why is it that the body can't grasp the forms? So there's a fuller version of the argument that's the one I want to focus on. And indeed, I put it up on the board. So Platonic metaphysics gives us premise number one--that ideas, forms, are eternal and they're non-physical. Two--that which is eternal or non-physical can only be grasped by the eternal and the non-physical. Suppose we had both of those. It would seem to give us three, the conclusion we want--that which grasps the ideas or the forms must be eternal or non-physical. What is it that grasps the ideas or the forms? Well, that's the soul. If that which grasps the ideas or the forms must be eternal/non-physical, well one thing we're going to get is, since that which grasps the forms must be non-physical, the soul is not the body. Since that which grasps the ideas or forms must be eternal or non-physical--the soul is eternal; it's immortal. All right. Let's look at this again more carefully. Ideas or forms are eternal; they're non-physical. Well, I've emphasized the non-physical aspect, and I've emphasized as well that they're not changing. But perhaps it's worth taking a moment to emphasize the eternal aspect of the forms. Now, people may come and go, but perfect justice--the idea of perfect justice--that's timeless. Nothing that happens here on Earth can change or alter or destroy the number three. Two plus one equaled three before there were people; two plus one equals three now; two plus one will always equal three. The number three is eternal, as well as being non-physical.
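The board argument just described can be collected into premise-and-conclusion form. This is my own reconstruction of the sketch above, with the step naming the soul as the grasper (which the lecture leaves implicit) made explicit:

```latex
% Reconstruction of "the argument from the nature of the forms"
\begin{enumerate}
  \item Ideas or forms are eternal and non-physical.
        \hfill (Platonic metaphysics)
  \item That which is eternal or non-physical can only be grasped by
        something that is itself eternal or non-physical.
        \hfill (``it takes one to know one'')
  \item Therefore, that which grasps the ideas or forms must be
        eternal or non-physical. \hfill (from 1 and 2)
  \item The soul is that which grasps the ideas or forms.
        \hfill (assumed)
  \item Therefore, the soul is non-physical (so the soul is not the
        body) and eternal (so the soul is immortal).
        \hfill (from 3 and 4)
\end{enumerate}
```

Laid out this way, it is easy to see why premise 2 carries all the weight in what follows: premises 1 and 4 are simply granted to Plato, and 3 and 5 follow once 2 is in place.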
So the Platonic metaphysics says quite generally, if we're thinking about the ideas or the forms, the point to grasp is they're eternal; they're non-physical. The crucial premise--since we're giving Plato number one--the crucial premise for our purposes is premise number two. Is it or isn't it true that those things which are eternal or non-physical can only be grasped by something that is itself eternal and non-physical? Again, it does seem as though the conclusion that he wants, number three, follows from that. If we give him number two, it's going to follow that whatever's doing the grasping--call that the soul since the soul is just Plato's word for our mind--if whatever's doing the grasping of the eternal and non-physical forms must itself be eternal and non-physical, it follows that the soul must be non-physical. So the physicalist view is wrong and the soul must be eternal. The soul is immortal. So Socrates has what he wants, once we give him premise number two, that the eternal, non-physical can only be grasped by the eternal, non-physical. As Socrates puts it at one point, "The impure cannot attain the pure." Bodies--corruptible, destroyable, physical, passing--whether they exist or not, whether they exist for a brief period and then cease to exist--these impure objects cannot attain, cannot grasp, cannot have knowledge of the eternal, changeless non-physical forms. "The impure cannot attain the pure." That's the crucial premise, and what I want to say is, as far as I can see there's no good reason to believe number two. Now, premise number two is not an unfamiliar claim. I take it the claim basically is that, to put it in more familiar language, it takes one to know one. Or to use a slightly different kind of language that Plato uses at various points, "Likes are known by likes." But "it takes one to know one" is probably the most familiar way of putting the point. Plato's saying, "What is it that we know?
Well, we know the eternal forms; takes one to know one. So we must ourselves be eternal." Unfortunately, this thought, popular as it may be, that it takes one to know one, just seems false. Think about some examples. Well, let's see, a biologist might study, or a zoologist might study, cats. Takes one to know one, so the biologist must himself be a cat. Well, that's clearly false. You don't have to be feline to study the feline. Takes one to know one; so, you can't be a Canadian and study Mexicans, because it takes one to know one. Well, that's just clearly stupid. Of course the Canadians can study the Mexicans and the Germans can study the French. It does not take one to know one; to understand the truths about the French, you do not yourself need to be French. Or take the fact that some doctors study dead bodies. Aha! So to study and grasp things about dead bodies, corpses, you must yourself be a dead body. No, that certainly doesn't follow. So if we start actually pushing ourselves to think about examples--does it really take one to know one--the answer is, at least as a general claim, it's not true. It doesn't normally take one to know one. Now, strictly speaking, that doesn't prove that premise two is false. It could still be that, although normally you don't have to be like the thing that you're studying in order to study it, although that's not normally true, it could be that in the particular case of non-physical objects, in the particular case of eternal objects, you do have to be eternal, non-physical to study them. It could be that even though the general claim, "it takes one to know one" is false, the particular claim, "eternal, non-physical can only be grasped by the eternal, non-physical," maybe that particular claim is true. And it's only the particular claim that Plato needs. Still, all I can say is, why should we believe two? 
Even though, normally, the barrier can be crossed and Xs can study the non-X, why should that barrier suddenly become un-crossable in the particular instance when we're dealing with Platonic forms? Give us some reason to believe premise two. I can't see any good reason to believe premise two, and as far as I can see, Plato doesn't actually give us any reason to believe it in the dialogue. Consequently, we have to say, as far as I can see, we haven't been given any adequate argument for the conclusion that the soul--which admittedly can think about forms and ideas--must itself be eternal and non-physical. We have no good reason yet to believe that, to be persuaded of it. That's the first argument. As I say though, Plato may well recognize the inadequacy of that argument, because after all Socrates goes on to offer a series of other arguments. So let's turn to the next. I call the second argument "the argument from recycling"--not the best label I suppose, but I've never been able to come up with a better one. And the basic idea is that parts get re-used. Things move from one state to another state and then back to the first state. So, for example, to give an example that Plato actually gives in the dialogue, we are all awake now, but previously we were asleep. We went from being in the realm of the asleep to being in the realm of the awake, and we're going to return from the realm of the awake back to the realm of the asleep and over and over and over again. Hence, recycling. I think that actually a better example for Plato's purposes--not that I expect him to have had this particular example--would be a car. Cars are made up of parts that existed before the car itself existed. There was the engine and the steering wheel and the tires and so forth. And these parts got assembled and put together to make up a car. So the parts of the car existed prior to the existence of the car itself.
And the time is going to come when the car will cease to exist but its parts will still be around. Right? It'll get taken apart for parts, sold for parts. There will be the distributor cap, and there will be the tires, and there will be the carburetor, there will be the steering wheel. Hence the name I dub the argument: "the argument from recycling." That's the nature of reality for Plato. And it seems like a plausible enough view. Things come into being by being composed of previously existing parts. And then, when those things cease to have the form they had, the parts get used for other purposes. They get recycled. If we grant that to Plato, he thinks we've got an argument for the immortality of the soul. Because after all, what are the parts that make us up? Well, there are the various parts of our physical body, but there's also our soul. Remember, as I said, in introducing the Phaedo, Plato doesn't so much argue for the existence of something separate, the soul, as presuppose it. His fundamental concern is to try to argue for the immortality of the soul. So he's just helping himself to the assumption that there is a soul. It's one of the parts that goes into making us up. It's one of the pieces that constitutes us. Given the thesis about recycling, then, we have reason to believe the soul will continue to exist after we break apart. Even after our death, our parts will continue to exist. Our body continues to exist even after our death. Our soul will continue to exist. Well, there's a problem with the argument from recycling, and it's this. Even if the recycling thesis shows us that we're made up of something that existed before our birth and that some kinds of parts are going to have to exist after our death, we can't conclude that the soul is one of the parts that's going to continue to exist after our death. Consider some familiar facts about human bodies. As we nowadays know, human bodies are made up of atoms.
And it's certainly true that the atoms that make up my body existed long before my body existed. And it's certainly true that after my death those atoms are going to continue to exist, and will eventually get used to make something else. So Plato's certainly right about recycling as a fundamental truth. The things that make me up existed before, and will continue to exist after my death. But that doesn't mean that every part of my body existed before I was born, and that every part of my body will continue to exist after I die. Take my heart. My heart is a part of my body. Yet, for all that, it didn't exist before my body began to exist. It came into existence as part of, along with, the creation of my body, and it won't continue to exist, at least not very long, after the destruction of my body. There'll be a brief period in which, as a cadaver I suppose, my heart will continue to exist. But eventually my body will decompose. We certainly wouldn't have any grounds to conclude my heart is immortal, will exist forever. That just seems wrong. So even though it's true that some kind of recycling takes place, we can't conclude that everything that's now a part of me will continue to exist afterwards. It might not have been one of the parts, one of the fundamental parts, from which I'm built--like the heart. And if that's right, if there can be parts that I have now that weren't among the parts from which I was made, there's no particular reason to think such a part is going to continue to exist after I die. Once we see that kind of worry, we have to see, look, the same thing could be true for the soul. Even if there is a non-physical soul that's part of me, we don't yet have any reason to believe that it's one of the fundamental building blocks that were being recycled.
We don't have adequate reason to conclude that it's something that existed before I was put together, it's something that will be recycled and continue to exist after I fall apart, after my body decomposes, after I'm separated from my body, or what have you. Even if recycling takes place, we don't have any good reason yet to believe that the soul is one of the recycled parts. So it seems to me "the argument from recycling," as I call it, is not successful either. Now, as I say, many times when you read the dialogue, this or other dialogues by Plato, it seems as though he's fully cognizant of the objections that at least an attentive reader will raise about earlier stages of the argument. Because sometimes the best way to understand a later argument is to see it as responding to the weaknesses of earlier arguments. And I think that's pretty clearly what's going on in the very next argument that comes up in the dialogue. The objection I just raised, after all, to the argument from recycling, said, in effect, even though some kind of recycling takes place, not all my parts get recycled, because not all of my parts were among the pre-existing constituent pieces from which I am built up. We don't have any particular reason to think my heart's one of the prior-existing pieces; we don't have any good reason to assume that my soul's one of the prior-existing pieces. Well, Plato's very next argument attempts to persuade us that indeed we do have reason to believe that the soul is one of the prior-existing pieces. And this argument is known as "the argument from recollection." The idea is, he's going to tell us certain facts that need explaining, and the best explanation involves a certain fact about recollecting, or a certain claim about recollecting or remembering. But we can only remember, he thinks, in the relevant way if our soul existed before the birth of our body, before the creation of our body. All right. What's the crucial fact? 
Well, Plato starts by reminding us of what it is to remember something. Or perhaps a better question would be: what is it to be reminded of something by something else that resembles it but is not the thing itself? I might have a photograph of my friend Ruth. And looking at the photograph reminds me of Ruth. It brings Ruth to mind. I start thinking about Ruth. I remember various things I know about Ruth. The photograph is able to do that, is able to trigger these thoughts. But of course, the photograph is not Ruth. Right? Nobody who's thinking clearly would confuse the photograph with my friend. But the photograph resembles Ruth. It resembles Ruth well enough to remind me of her, and interestingly, it can do that even if it's not a very good photograph. You might hold up the photograph and I might say, "Gosh, that really doesn't look very much like Ruth, does it?" Even so, I see that it is a photograph of Ruth, and it reminds me of her. Now, how could it be that a photograph reminds me of my friend? Well, this isn't some deep mystery. Presumably the way it works is, as I just said, it looks sort of like her. It doesn't have to look very much like her. It looks sort of like her. Your young brother or sister, or my little children, can draw pictures of family members that barely look like family members. My niece drew a picture of my family once when she was three. It didn't look very much like us at all, but we could sort of see the resemblance in a vague kind of way, right? So it's got to look at least somewhat like the missing friend. But that's not enough. You've never met Ruth, let's suppose. I hold up the photograph without having told you anything about her. The photograph's not going to remind you of Ruth. Why not? Well, you don't know Ruth. So the pieces we need are not only an image of Ruth, even if an imperfect image of Ruth; we also need some prior acquaintance with Ruth. That's pretty much what it takes, right?
So on the one hand--temporal sequence--first you know Ruth, you meet Ruth, you get to know Ruth. Then at a later time you're shown an image of Ruth--maybe not even an especially good image of Ruth--but good enough to remind you. And suddenly, you're remembering things you know about Ruth. That's how recollection works. All right. Now, Plato points out that we all know things about the Platonic forms. But the Platonic forms, as we also know, are not to be found in this world. The number three is not a physical object, perfect roundness is not a physical object, perfect goodness is not a physical object. We can think about these things; our mind can grasp them, but they're not to be found in this world. Yet, various things that we do find in this world get us thinking about those things. I look at the plate on my kitchen table, it's not perfectly round, it's got imperfections; but suddenly I start thinking about circles, perfectly round objects. I look at somebody who's pretty. He or she is not perfectly beautiful, but suddenly I start thinking about the nature of beauty itself. Ordinary objects in the world participate to a greater or lesser degree in the Platonic forms. That's Plato's picture of metaphysics. And we bump up against, we look at, we have interactions with these everyday objects and, somehow, they get us thinking about the Platonic forms themselves. How does it happen? Plato has a theory. He says, "These things remind us of the Platonic forms." We see something that's beautiful to some degree, and it reminds us of perfect beauty. We see something that's more or less round, and it reminds us of perfect circularity. We see somebody who's fairly decent morally, and it reminds us of perfect justice or perfect virtue. It's just like the photograph, perhaps the not very good photograph, that reminds me of my friend Ruth. All right. 
Well, there's an explanation of how it could be that things that are not themselves perfectly round could remind us, could make us think about perfect roundness. But then Plato says, "Okay, but keep in mind all of what you need in order to have reminding, to have recollecting take place." In order for the photograph to remind me of Ruth, I have to already have met Ruth. I have to already be acquainted with her. In order for a more or less round plate to remind me of roundness, Plato says, I have to have already met perfect roundness itself. In order for a more or less just society to remind me of justice itself, so that I can start thinking about the nature of justice itself, I have to somehow have already been acquainted with perfect justice. But how and when did it happen? Not in this life, not in this world. In this world nothing is perfectly round, nothing is perfectly beautiful, nothing is perfectly just. So it's got to have happened before. If seeing the photograph of my friend now can remind me of my friend, it's got to be because I met my friend before. If seeing things that participate in the forms reminds me of the forms, it's got to be because I've met or been acquainted directly with the forms before. But you don't bump up against, you don't meet, you don't see or grasp or become directly acquainted with, the forms in this life. So it's got to have happened before this life. That's Plato's argument. Plato says, thinking about the way in which we grasp the forms helps us to see that the soul must have existed before birth, in the Platonic heavenly realm, directly grasping, directly communing with, directly understanding the forms. It's not taking place in this life, so it has to have happened before. Well, look, now we've got the kind of argument we were looking for.
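Laid out the same way as before, the argument from recollection just summarized can be sketched in premise-and-conclusion form. Again, this is my own reconstruction of the lecture's steps, not a layout Plato gives:

```latex
% Reconstruction of "the argument from recollection"
\begin{enumerate}
  \item Being reminded of $X$ by an imperfect image of $X$ requires
        prior acquaintance with $X$. \hfill (the photograph of Ruth)
  \item Imperfect worldly things (more or less round plates, more or
        less just societies) remind us of the forms themselves.
  \item Therefore, we must have had prior acquaintance with the forms.
        \hfill (from 1 and 2)
  \item Nothing in this life or this world acquaints us directly with
        the forms.
  \item Therefore, the acquaintance occurred before this life: the soul
        existed before the birth of the body. \hfill (from 3 and 4)
\end{enumerate}
```

Notice that this directly answers the objection to the recycling argument: step 5 is precisely the claim that the soul is one of the prior-existing pieces.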
Earlier the objection was, we had no good reason to think the soul was one of the building blocks from which we're composed; we have no good reason to think it's one of the pieces that was around before our body got put together, before our birth. Socrates says, "No. On the contrary, we do have reason, based on the argument from recollection, to conclude that the soul was around before we were born." All right. So the next question is, is the argument from recollection a good one? Now, let me say, I'm not really much concerned with whether this was an argument that Plato thought worked or not. Our question is, do we think it works or not? Although this is a form of argument that Plato does put forward in other dialogues as well, and so it strikes me that there's at least some reason to think this is an argument that he felt might well be right. The crucial premise--Again, we're going to just grant Plato the metaphysics. The crucial question is going to be: is it right that, in order to explain how it is we could have knowledge of the forms now, we have to appeal to a prior existence in which we had direct acquaintance with them? It's not obvious to me that that's true. It's not obvious to me for a couple of reasons. One question is this: Is it really true that in order to think about the perfectly straight, I must have somehow, somewhere at some point come up against, had direct knowledge of, the perfectly straight? Isn't it enough for me to extrapolate from cases that I do come up against in this life? I come across things that are bent; I come across things that are more straight, more and more straight. Can't my mind take off from there and push straight ahead to the idea of the perfectly straight, even if I never have encountered it before? Let me stop with this idea.
Even if Plato is right, that we need to have acquaintance with the Platonic forms themselves in order to think about them, and even if Plato is right that we never get the acquaintance in this world, in the interaction with ordinary physical objects, why couldn't it be that our acquaintance with the Platonic forms comes about in this life for the very first time? That's the question, or that's the objection, that we'll turn to at the start of next class.
YaleCourses_Philosophy_of_Death | 22_Fear_of_death.txt

Professor Shelly Kagan: Last time, I distinguished between two ways in which thinking about the facts about the nature of death could influence our behavior. On the one hand, it could give us reasons to behave or respond differently, and on the other hand it could merely cause us to behave differently. Insofar as it just happens to be some fact about human psychology that we behave this way or that way, perhaps the appropriate way to deal with the facts of death would be to simply disregard them. I'm inclined to believe, however, that there are ways in which thinking about the facts would not merely cause us to behave one way rather than another but give us reason to behave one way rather than another. And that's the question that I want to then explore from here on out. In what circumstances, or in what ways, should we behave one way rather than another? So, I'm not merely going to draw on facts about how, as it happens, we behave. It could be that if you dwelled upon the facts about death, you would scream interminably until the moment you died--taking a tip from Tolstoy. But that doesn't itself show that that's an appropriate response; that might just be a mere causal fact about how we're built. The question I want to ask is, how is it appropriate, in what ways is there reason, to react one way rather than another? Now as I say, the thought seems very compelling for most of us that there are ways in which it makes sense for the facts about death to influence how we live, what our attitudes are, what our emotions are. Kafka, for example, said the meaning of life is the fact that it ends. Nice little cryptic saying, as is typical of Kafka. But the suggestion, I suppose, is a fairly common one, that there's something deep about how we should live in the fact that we're going to die, that our life will come to an end.
And the question we want to then explore is how should the fact, how should recognizing the fact, that we're going to die, influence how we live? How should we respond to that fact? Now actually, the very first kind of behavior, quote/unquote behavior, that I want us to think about perhaps isn't strictly speaking a form of behavior at all. I rather have in mind our emotional response, because indeed one of the most common reactions to death, I suppose, is fear of death. Indeed, fear may in many cases be too weak a term--an extremely strong form of fear--terror of death is, I suppose, a very common emotional response to death. And what I want to do next is have us ask ourselves, well, is fear of death a rationally appropriate response? Now, the crucial word here is "appropriate." I don't want to deny at all what I take to be the empirical fact that many people are afraid of death. How common a reaction that is, and how strong the fear is, I suppose that would be something for psychologists or sociologists to study. And I'm not interested in that question. I take it that fear of death is very common. I want to know, is fear of death an appropriate, a reasonable emotion? Now, in raising that question, I'm obviously presupposing the larger philosophical thesis that it makes sense to talk about emotions as being appropriate or inappropriate. We can ask not only what emotions does somebody have, but we can also ask what emotions should they have? Now, this point perhaps isn't an obvious one, so maybe it's worth dwelling on for a moment or two, before we turn to fear of death per se. What's another example of an emotion that's got some appropriateness conditions? So, in a moment I'll turn to asking, what are the conditions under which it's appropriate to be afraid of something?--but to make the more general point, look, take something like pride; pride's an emotion. Under what conditions does it make sense to be proud of something? 
Well, I suppose at least two conditions jump out. First of all, the thing that you're proud of has to be some kind of accomplishment. If you were to say to me right now, "I'm really proud of the fact that I'm breathing," I'd look at you in a noncomprehending fashion because it doesn't seem to me that breathing is difficult in any way, doesn't count as an accomplishment, and as such I can't understand how or why you would be proud of the fact that you're breathing. Now, maybe if you suffered from asthma and you had to have gone through excruciating physical therapy in order to learn how to use your lungs after some accident or something; maybe if we told a story like that we could see how breathing naturally and normally would be an accomplishment, something to be proud of. But for all of us, I presume, it's not an accomplishment; hence it's not something that it's appropriate to be proud of. Even if we've got an accomplishment, that may not be enough. For something to be something that it makes sense for you to be proud of, it's got to be in some way an accomplishment that reflects well on you. Now, the most straightforward cases are cases where it's your accomplishment, and the reason that pride makes sense is because you're the one who did this difficult thing. So, you got an A on your philosophy paper and you tell me that you're proud and I understand that; getting an A on a philosophy paper is an accomplishment, and if you wrote the paper then I understand why you're proud. Of course, if what you did was go on the Internet and go to one of those sites where you pay money and somebody else writes an A paper for you, well, I understand why maybe they should be proud that they've written a great philosophy paper, but I don't see how this reflects especially positively upon you. So again, there's a kind of appropriateness condition for pride, where the object or the event or the activity that you're proud of, or the feature, has to somehow reflect on you.
Now, that's not to say that it's got to be your accomplishment, at least not in any straightforward, narrow sense. It makes sense, for example, to be proud of your children's accomplishments because there's the right kind of connection between you and your children. So, in some sense it's connected to you. And we can have cases where we wonder about whether or not the connection is tight enough or what exactly the nature of the connection has to be. Perhaps as an American you take pride when the Americans win some event at the Olympics or the Tour de France or what have you, and you say to yourself, "Well look, I didn't ride the bicycle, but for all that I'm an American and an American won; I'm proud." And that makes sense; we can understand how you think the connection there is tight enough. On the other hand if you say, "Look, the Germans won the event in the Olympics and I'm really proud," and I ask, well, are you yourself German, do you have German heritage, did you contribute to the German Olympic support team? If none of that's true, then again the appropriateness condition doesn't seem to be satisfied. It doesn't make sense to be proud. All right, look, we could spend more time worrying about the conditions under which it makes sense to feel pride. But of course that's not really my purpose here. My purpose in bringing that in was just to try to make good on the thought that emotions do have requirements; not necessarily requirements for what you have to have in place in order to feel the emotion. It's a harder question whether all these things need to be in place in order to feel the emotion. But at least these things need to be in place in order for it to make sense for you to have the emotion, in order for it to be rational or reasonable to feel the emotion, in order for that emotional response to be an appropriate response to your circumstances or situation. So, let's ask ourselves, then, what are the appropriateness conditions for fear?
Because armed with that set of conditions, we'll then be able to go on and ask, is it appropriate to feel fear of death? Now, three conditions come to mind when I think about this question, when I've thought about this question over the years. The first is this--and I suppose this first one's going to be fairly uncontroversial--in order to be afraid of something--or rather, since I slipped into talking about what you need to have in order to feel fear, what I really mean is in order for it to make sense to feel fear--the thing that you're afraid of has to be bad. If somebody were to say to me, "I'm afraid that after class somebody's going to give me an ice cream cone," again I'd look at them in noncomprehension. I'd say, "Why are you afraid of that? How could it make any sense to be afraid?" And again, it's not that somebody couldn't give you an answer. They'd say, "Oh, I'm trying to lose weight, but I'm so weak, and if they give me an ice cream cone then I'll just eat it and that'll ruin my diet for the week," well, then I'd understand. From that point of view an ice cream cone is a bad thing and so that first condition on fear would be satisfied. But if you don't have a story like that, if you're like most of us, most of the time, and an ice cream cone's a pretty good thing, a source of some passing but at least genuine pleasure, then you say, "How can you be afraid of having or getting or eating an ice cream cone?" It doesn't make sense. To be afraid of something, it's got to be bad. It's one of the reasons why we sometimes look askance at people who have various kinds of phobias--fear of spiders or fear of dust or what have you, fear of bunnies--and you think, how does this make any sense? It's this cute little bunny; it's not dangerous. And maybe there are poisonous spiders, but most of the spiders we run across here in Connecticut are not poisonous. Fear of spiders doesn't seem appropriate.
It's not that people can't have this kind of emotional reaction, it's that it doesn't make sense. Maybe it's another matter if you live in Australia, where there are poisonous snakes and spiders and other dangerous creatures everyplace. All right. So, condition number one: Fear requires something bad, as the object of your fear. I can fear getting a migraine, if I'm subject to migraines. I can't fear the pleasure of looking at a beautiful sunset. That's condition number one--bad object, something harmful. Condition number two is, there's got to be a nonnegligible chance of the bad state of affairs happening, of the bad object coming to you. For fear to be a reasonable reaction, it's not enough that the bad thing is a logical possibility. There's nothing logically inconsistent or logically incoherent about the possibility that I will face my death by being ripped to pieces by Siberian tigers. It's not as though that's an inconsistent state of affairs. It's certainly logically possible, but it's so unlikely, it's so negligibly small a chance, that if anybody here is afraid that they'll be ripped to pieces by tigers, then I can only say the fear doesn't make any sense, it's not appropriate. Again, we can tell special stories where that might be different. Suppose you tell me that, oh, when you're not being a student, for your work-study program you work as an animal trainer, or you're planning to work in the circus where you'll be training tigers; then I'll say, all right, now I suppose there's a nonnegligible chance you'll be mauled and killed by tigers. I understand it. But for the rest of us, I suppose, the chance of being killed by tigers is, well, it's not literally zero, but it's close to zero, it's negligible. And so, fear of being eaten by tigers or mauled to death by tigers doesn't make any sense.
And once you get the point, of course, it would be easy to talk about a variety of other things where the chances are so small--fear of being kidnapped by space creatures from Alpha Centauri, where I'll be taken back to the lab and they'll prod me before they dissect me alive without anesthetic. Yes, I suppose there's some possibility of that. It's not logically impossible. But again it's so vanishingly small a chance, and anybody who actually is afraid of that, the appropriate thing for us to say is that their fear is not appropriate. All right, so you need to have a chance of the bad thing, and it's got to be a large enough chance. And I suppose again there would be room for us to argue about how large a chance is large enough, but when you have vanishingly small chances then the fear doesn't make any sense. That's condition number two. Condition number three, I think, is somewhat more controversial, but for all that it still seems correct to me, and that's this. We need to have a certain amount of uncertainty in order to have fear be appropriate. You need to have some--it's not clear how much--but at least some significant amount of uncertainty about whether the bad thing will occur, and/or how bad the bad thing will be. To see the point, to see the relevance of this third condition, imagine that a bad thing was going to happen to you with a nonnegligible chance. Indeed, far from being so small that it's virtually not worth even considering, imagine that it's guaranteed that the bad thing is going to happen. So, there's a bad thing that's going to happen, and you know precisely how bad it is. So you've got certainty with regard to the fact that the bad thing is going to happen, and certainty with regard to the size of the bad thing. I put it to you that in circumstances like that, fear is not an appropriate emotional response. Suppose that what happens is this.
Every day you come to school, to the office, whatever it is, and you bring a bagged lunch, and you put it in the office refrigerator. And you include, along with your lunch, a dessert; let's say a cookie. And every day at one o'clock, when you go to grab your lunch out of the refrigerator, you look inside and you see somebody has stolen your cookie. Well, it's a bad thing; it's not the worst thing in the world, but it's a bad thing to have somebody steal your cookie. And furthermore, this is more than a negligible chance. So, we've got condition one, condition two in place--bad thing and a nonnegligible chance of it happening. In fact, it's not only a nonnegligible chance that it's happening--it's guaranteed; it happens day after day after day after day. Bad thing, guaranteed. And you know precisely how bad it is. I put it to you, fear in that case doesn't make any sense. Mind you, there are other negative emotions that probably make sense, like anger and resentment. Who does this thief, whoever it is, think that he or she is, to be stealing your cookie? They don't have the right to do that! You can be angry, you can be resentful. You can be sad that you don't have a dessert, day after day after day. But you can't be afraid, because there's nothing here that it makes sense for you to be afraid of. Again, being a little sloppy, maybe you are afraid, but if so, fear doesn't make sense, when you know for a certainty that the bad thing is coming and how bad it is. Now suppose instead that the thief strikes at random, taking different people's desserts from different bags at different times of the week, and you never know who he or she is going to steal from. Then you might be afraid that you'll be the person whose cookie gets stolen. Or if the cookie seems to you too silly an example, imagine that what happens is somebody breaks into dorm rooms. There's been a thief going around various dorms on campus and stealing computers from dorm rooms.
Well there, fear makes sense; you're afraid that they'll steal your computer. Bad thing, nonnegligible chance, and lack of certainty. On the other hand, suppose what happens is, this is one of those thieves like you always have in the movies, where he's such a master thief, or she's such a master thief, that they take pride in their work, and so they announce it. They take out an ad in the Yale Daily News and they say, "On Wednesday, April 27th, I shall steal the computer from so-and-so's room." And it doesn't matter what precautions you take, something always happens, and that person's computer gets stolen. Well, again, you could be angry, you can be pissed, you can be annoyed, you can feel stupid that you didn't take adequate precautions. But when the ad appears, with your name, and that date, and all year the thief has always carried through on the announced theft, I put it to you, fear doesn't make any sense, because if you know exactly what the size of the harm is going to be, and you're guaranteed that the harm is coming, fear is no longer appropriate. Suppose that I have a little torture machine, a little pain generator, where I put your hand down and I hook it up to the electrodes and I crank the dial and I pull the switch, and you feel an electric shock. It makes sense to fear what the next shock is going to feel like, if the shocks vary in their intensity. But suppose the machine's only got one setting, on and off, and all the shocks feel exactly the same, and I've done it for you: "So look, okay, let me show you what it feels like; it feels like that." Oh, not comfortable. "Let me show you what it feels like; it feels like that"--over and over, 5, 6, 7, 8 times; we're doing some sort of weird psychology experiment here. Well, you know exactly that it's coming, you know exactly what it's going to feel like. Fear, I put it to you, doesn't make any sense.
Suppose the experiment's over now, you think--you've gotten your ten dollars--but I refuse to let you go, and I say, "I'm going to do it one more time, no worse than before." Well, you might not believe me, and that might introduce the element of uncertainty, and then perhaps fear would be appropriate. But if you believe me that one more pain exactly like the ones you felt before is coming, fear--anger makes sense, resentment makes sense, sadness that you're going to feel this pain perhaps makes sense--but fear doesn't make sense. So, three conditions. You need to have something bad. You need to have, on the one hand, a nonnegligible chance that the bad thing's going to happen, and, on the other, a lack of certainty. If you've got certainty as to the nature of the bad and certainty that it's coming, then fear doesn't make sense. One of the points probably worth mentioning in passing--even when fear does make sense, there's a kind of proportionality condition that we need to keep in mind as well. Even if there's a nonnegligible chance of the harm coming, and so some fear is appropriate, that doesn't make obsessive fear, horrendous fear, tremendous fear appropriate. Maybe some mild concern is all that's appropriate if the chances are small. Similarly, the amount of fear needs to be proportioned to the size of the bad. That's perhaps why, in the cookie example, you might think a lot of fear is not appropriate: even if the harm comes, how bad is it? The loss of a cookie. All right, so there are some conditions that need to be met before fear is appropriate at all, and on the other hand even when fear is appropriate, it's still legitimate to ask, how much fear is appropriate? So, armed with all of this, let's now turn to the question, is fear of death appropriate, and if so, how much? And immediately we see we need to draw some distinctions. Well, what are we supposedly being afraid of when we are afraid of death?
And two or perhaps three things need to be distinguished. The first thing you might worry about is the process of dying. Some people find that the actual process at the end of their life is a painful and unpleasant one. Yes, I've given the example of being mauled to death by tigers or eaten alive by tigers. Well, I imagine that would be a pretty unpleasant way to die. And so insofar as there is some nonnegligible chance that you will die a painful death, then I suppose there's some room for some--an appropriate amount--of fear. Of course, we then have to ask, well, what is the chance that you'll die painfully? I've already indicated that for people in this room I rather imagine the chance of being mauled to death by tigers is vanishingly small. So, I think, no fear of that form of painful death is appropriate. And for that matter, I suspect that fear of dying through a painful operation by the aliens from Alpha Centauri is not appropriate either. Still, the sad fact of the matter is that there are people in the world who do suffer painful deaths, in particular, of course, because a number of the diseases that might kill us off are, in their final stages, sometimes painful. Now, one of the interesting facts is that we could of course minimize or eliminate the pain by giving people adequate pain medication. And so, it comes as a rather unpleasant bit of news that most hospitals do not provide adequate pain medication, in many, many instances, at the end of life. Why? That's a whole other complicated question. But suppose somebody were to say to me: look, I read the newspaper; there are studies done periodically about whether or not there's adequate pain medication at the end of life, and the studies suggest, year after year, that no, we still don't in general provide adequate pain medication. If you were to say to me, "In light of that, I've got some fear that this may happen to me," well, I'd understand that.
Again, if you said to me, "I can't sleep for fear that this is going to happen to me," I'd want to say, well, that sort of fear strikes me as disproportionate. But at any rate, I suppose that when people say that they're afraid of death, although what some of them, in some moments, might have in mind is that they're afraid of the process of dying, I take it that that's not actually the central fear that people mean to be expressing. People mean to suggest that they're afraid of death itself, that they're afraid of being dead. And with regard to that, I want to suggest, I don't actually think the relevant conditions are satisfied. Look, let's think about what they were again. There had to be a certain amount of uncertainty. Well, of course, with regard to being dead there's no uncertainty at all. You're guaranteed that you're going to die. And indeed, consider condition number one--for fear to make sense, the object of my fear has to be a bad thing. Well, let's ask ourselves, is being dead intrinsically a bad thing? It doesn't seem to me that it is. Of course, this all presupposes the positions about the nature of death that I argued for in the first half of this semester. There's nothing mysterious or unknown about death. Look, suppose you thought there was. Suppose you believe in the afterlife, or at least the possibility of an afterlife, and you're worried that you might go to hell. Well, then fear makes some sense. If there's a possibility, nonnegligible in your mind, that there'll be a painful experience after you die--not guaranteed--then fear makes sense. But if you're a bad enough sinner, so that you're certain you're going to hell, then again I think condition number three isn't satisfied.
But if, like most of us, you don't know whether you're a bad enough sinner or not, so that there's some nonnegligible chance of this bad thing, without certainty--well, somebody like that who says they're afraid of being dead, for fear that they might find themselves in hell--at least I understand that. But on the physicalist picture where death is the end, where when your body decays there's no experience at all, then it seems to me that the first condition on fear isn't satisfied. The badness of death, after all, according to the deprivation account, is the mere absence of a good. And it seems to me the mere absence of a good is not the right kind of thing to be afraid of. Suppose I give you an ice cream cone, and you like it. You wish you could have a second ice cream cone. But I don't have a second ice cream cone to share with you. So you know that after the first ice cream cone is over, you won't have a second ice cream cone. That's a pity; that's a lack of something good. And now you're telling me, "I'm afraid; I'm afraid of the fact that there will be this period after the first ice cream cone is done in which I'm not getting a second ice cream cone. I'm afraid because of the badness of deprivation of ice cream." I say to you, deprivations per se are not the kind of thing to be afraid of; they're not bad in the right kind of way. So, if death is bad only or most centrally insofar as it's a deprivation of the good things in life, there's nothing bad there to be afraid of. Well, that doesn't mean there isn't anything here in the neighborhood. After all, we have to worry not just about the fact that we're going to die, we have to worry about when we're going to die. We might be certain that death is going to come, but we're not certain that death is going to come a long time from now, as opposed to soon. So, perhaps the relevant thing to be afraid of is the possibility that you'll die soon. Consider an analogy.
Suppose that you're at a party, it's a great party, you wish you could stay and stay and stay, but this is taking place back in high school, and what's going to happen is your mother is going to call at a certain point and tell you it's time to go home. Now, let's just imagine there's nothing bad about being at home; it's neutral. You just wish you could stay, but you know you can't. If you know the call is going to come at midnight, guaranteed, then there's nothing to be afraid of. You might resent the fact that your mother is going to call you at midnight, be annoyed at the fact that she won't let you stay out till one o'clock like your other friends, but there's nothing to be afraid of. There you are at 11 o'clock, saying, "I'm terrified of the fact that the call's going to come at midnight; I know it's going to come." See, fear there doesn't make sense, because it doesn't have the relevant degree of uncertainty. You know exactly what's coming and you know for a certainty that it's coming; fear isn't appropriate. Well, suppose that instead of a guarantee that your mother's going to call at midnight, what we've got is that your mother's going to call sometime between 11 and 1. Now, some fear makes sense. Most of the time she calls around 12 or 12:30; sometimes she calls at 1 for parties; occasionally she calls at 11. You're worried now; there's a nonnegligible chance she'll call at 11 rather than sometime later, at 12 or 1 o'clock. There's a bad thing, some nonnegligible chance, and the absence of certainty. Now some degree of fear makes sense. And perhaps that's what we've got with regard to death. If so, we might say the crucial ingredient here, by virtue of which death is something that it's appropriate for us to be afraid of, is the unpredictability. Even if we had variability we might not have unpredictability. That's a point that we touched upon previously.
It's the unpredictability that leaves you in a position of not knowing whether death will come soon, or death will come late. Will you die at 20, will you die at 50, will you die at 80, or will you die at 100? It seems to me that if it weren't for the unpredictability, fear of death wouldn't make any sense at all. Given that we do have unpredictability, some fear of death might make sense; although again it's important to be clear about what it is that it makes sense to be afraid of. It's not being dead per se. I remain of the opinion that being dead per se is not the sort of thing it makes sense to be afraid of, once you've concluded that death is the end. The only thing that it might make some sense to be afraid of is that you might die too soon--earlier rather than later. Of course, having noted that point, we then have to ask, well, how much fear is appropriate? How great is the chance that you'll die too soon? Your fear needs to be proportioned to the likelihood. How likely is it that you will die in the next year, or five years, or for that matter 10 or 20 years? The fact of the matter is, for most of you, almost all of you, the chances are very low indeed; not quite negligible, but rather small. For a healthy 20-year-old, for example, the chances of dying in the next five or ten years are extremely small, in which case even if some slight fear might be called for, no significant amount of fear seems called for. So, if somebody were to say to me, "Look, the facts about death are so overwhelming that I'm terrified of death," all I can say in response is not that I don't believe you, but that, for all that, terror of death seems to me not an appropriate response. It doesn't make sense given the facts. Now, having said that, that doesn't mean that there may not be some other emotion, some other negative emotion, that is appropriate. Fear of death strikes me as, for the most part, overblown; it's widespread, I suppose, but for the most part inappropriate.
But that doesn't mean other negative reactions are ruled out. As I suggested before in working through some of these examples, sometimes anger makes sense; sometimes resentment makes sense; sorrow, regret, sadness--those may make sense. So, in having argued that for the most part fear of death does not make sense, I haven't yet given us any reason to think that there might not be other emotions, negative emotions, that do make sense. So let's ask. What about some of those other possible emotions? What negative emotion, if any, does it make sense to feel about death itself, the fact that you're going to die? Well, of course, look, it's also worth bearing in mind, since I've argued that immortality would be bad, that the fact that you will die is not actually bad. It's good, because it saves you from the unpleasant aspect of an eternal, dreary, dreadful immortal existence. Still, we might say, most of us, almost all of us, die too soon. So, what about that? We die before life has yielded up all the goods that it could have given us. So what is the appropriate negative emotional response here? Or is there one? I suppose the natural second suggestion is anger. You might say, look, maybe fear isn't right, isn't appropriate, but anger is. I'm angry. I want to shake my fist at the universe and curse the universe for giving me only 50 years or 70 years or 80 years, or even 100 years, when the world is such a rich, incredibly fantastic place that it would take thousands of years or longer to exhaust what it has to offer. So, isn't anger an appropriate response? And again, I think the answer is not so clear that it is, because, like all the other emotions, anger itself has appropriateness conditions. In order for anger to make sense, well, here's condition number one. It seems to me it's got to be directed at a person, it's got to be directed at an agent, it's got to be directed at something that had some choice over what it was doing to you.
So, when your roommate, whatever it is, spills coffee on your computer, destroying the hard drive or whatever it is, because they were careless, even though you told them previously to be more careful, anger makes sense. It's directed at your roommate, who's a person, who had some control over what they were doing. Your roommate's an agent. If you want to get angry at me for the grades that you receive in this class, well, at least condition number one is satisfied; you're directing your anger at an agent, at an individual person who has some control over how I behave. Condition number two, I suppose--this may not be all the conditions, but at least a second one is--anger makes sense when, and only when, the agent has wronged you, has treated you in a way that it was morally inappropriate for them to treat you. If your roommate has been doing things that you don't like, but they haven't done anything wrong, anger doesn't make sense. When you are angry at them, you are revealing the fact that you think they've mistreated you. Mistreatment requires the notion that they've behaved toward you in a way that morally they shouldn't. All right, these strike me as two conditions that need to be in place in order for anger to be an appropriate emotional response. Of course, again, we no doubt feel anger in other cases, although typically when we're angry at inanimate objects it's because we've personified them. Your paper is due, you're rushing off to class, you're about to print it out, and your computer crashes, and you get angry at the computer. Well, what's going on there, I suppose, is you've personified the computer. You have fallen into the trap, understandable, natural, of viewing the computer as though it was a person who had deliberately chosen to fail right now, letting you down yet again. And I understand this sort of behavior; I do this sort of thing as well. But of course you can step back.
At least, once your anger has subsided, you can step back and say, look, getting angry at your computer doesn't really make sense, because your computer is not a person; your computer is not an agent; your computer didn't have any choice or control. So take those two conditions and now ask ourselves: does it make sense, then, to be angry at the fact that we're going to die? And I suppose the answer is going to be, well, look, who is it, or what is it, that you think is the cause of our mortality, of the fact that we only get our 50 or 80 years? Here are two crude, basic alternatives. You might believe in God, a kind of classic, theistic conception of God, according to which God is a person who makes decisions about what to do. And God has condemned us to death. That's what happens in Genesis: God punishes Adam and Eve by making them die. All right, that's picture number one. Picture number two is you just think there's this impersonal universe, atoms swirling in the void, coming together in various combinations, but there's no person behind the scenes controlling all of it. Let's consider the two possibilities. Possibility number one, God. Well, look, if you've got the God view, at least we satisfy the first of our appropriateness conditions. We can say: I'm angry at God for condemning us to a life that's short, that's so inadequate, relative to the riches that the world offers us. That satisfies condition number one. But what about condition number two? Condition number two, after all, requires that God has mistreated us in giving us our 50 or 80 or 100 years. And is that the case? Has God wronged us? Has God treated us in some way that isn't morally justified? If not, anger at God, resentment of God, wouldn't make sense. Suppose your roommate comes into the suite with a box of candy, and he gives you a piece of candy, and you enjoy it. And he gives you a second piece of candy and you enjoy it.
And he gives you a third piece of candy and you enjoy it. And you ask for a fourth piece of candy, and he won't give it to you. Has he wronged you? Has he treated you immorally? Does he owe you more candy? It's not clear that he or she does. But if not, then being angry--again, I would certainly understand it if you got angry, in the sense that it's a perfectly common enough response. But is anger an appropriate response to your roommate for giving you something, and then not giving you more? It's not clear that it is an appropriate response. The appropriate response actually seems to me to be not one of anger, but gratitude. Your roommate didn't owe you any candy at all, and they gave you four pieces, or whatever the number was. You might wish you could have more, you might be sad that you can't have more, but anger doesn't seem appropriate. God doesn't, as far as I can see, owe it to us to give us more life than what we get. Well, suppose we don't believe in the God theory but in the universe theory. Well then, of course, even condition number one isn't satisfied. The universe is not a person, is not an agent, has no choice and control. And as such, again, it just seems to me that anger then--I can lift my fist and curse the universe; of course, what I'm doing then is personifying the universe, treating the universe as though it was a person that deliberately decided to make us die too soon. But however common that response might be, it makes no sense rationally if the universe is not a person. It's just atoms swirling, forming various kinds of combinations. Anger at the fact that I'm going to die, or die too soon, doesn't make sense either. Well, what about sorrow? Maybe I should just be sad at the fact that I'm going to die too soon. And I think some emotion along that line does make sense. The world's a wonderful place. It would be better to have more of it. I'm sad that I don't get more, that I'm not going to get more.
But having had that thought, I immediately find myself with another thought. Although it's a pity I don't get more, I'm extremely lucky to have gotten as much as I get. The universe is just this swirling mass of atoms, forming clumps of various kinds of things, and dissolving. Most of those atoms don't get to be alive at all. Most of those atoms don't get to be a person, falling in love, seeing sunsets, eating ice cream. It's extraordinarily lucky of us to be in this select, fortunate few. Let me close then with an expression of this thought. This is from Kurt Vonnegut's book, Cat's Cradle. This is a kind of prayer that one of the characters in the novel says--is supposed to say--at the deathbed. God made mud. God got lonesome. So God said to some of the mud, "Sit up." "See all I've made," said God. "The hills, the sea, the sky, the stars." And I, with some of the mud, got to sit up and look around. Lucky me, lucky mud. I, mud, sat up and saw what a nice job God had done. Nice going, God! Nobody but You could have done it, God! I certainly couldn't have. I feel very unimportant compared to You. The only way I can feel the least bit important is to think of all the mud that didn't even get to sit up and look around. I got so much, and most mud got so little. Thank you for the honor! Now mud lies down again and goes to sleep. What memories for mud to have! What interesting other kinds of sitting-up mud I met! I loved everything I saw [Vonnegut 1963]. It seems to me that the right emotional response isn't fear, it isn't anger; it's gratitude that we're able to be alive at all.
YaleCourses_Philosophy_of_Death | 21_Other_bad_aspects_of_death_Part_II.txt

Professor Shelly Kagan: All right. Last time we started asking ourselves what are some of the other aspects of death that might contribute to its badness, or at least other features of death that are worth thinking about. Conceivably, some of them might reduce the badness of death in some way. We talked about the inevitability of death; we talked about the variability, that people have different lengths of time before they die. And we turned to a discussion of the unpredictability of death, the fact that because we don't know--we can't predict--how much more time we've got, we may, as it were, pace ourselves incorrectly. You may take on a long-term project and then die before you've been able to complete it; or alternatively, you may peak too soon and then continue to stick around in an anticlimactic way. These are bads of life that could presumably be avoided if only we knew exactly how much longer we had. On the other hand, we have to ask ourselves--and this is the question that I left us with last time--whether it would really, all things considered, be better to know how much time you had. After all, suppose we had the birthmarks that told you when you were going to die--if you had that kind of a birthmark, you would face your entire life with the burden of knowing: I've got 48 years left, 47 years left, 46 years left--counting down--35, 30, 25, and so forth. Many of us would find that, as I say, a burden--something hanging constantly over us, interfering with our ability to enjoy life. Suppose that there were some sort of genetic marker, and although we didn't have a tattoo that you could just look at, you could have genetic counseling--have your DNA examined--and you could tell, if you had the DNA testing, how much time you had left. Would you want to get that testing done?
Now, that's of course science fiction, and I presume it's going to stay science fiction--though we're on the cusp of having something at least approximating it. As we learn more and more about the various genes that carry various diseases, more and more of us face the question of whether or not we want to get tested for those diseases. Suppose there was a test. Indeed, one occasionally reads in the newspaper about this sort of thing, where you can get tested for such and such a disease. You might know already that you've got a 50 percent chance of having it, but you don't know whether you yourself have it. If you do have it, the disease will always have onset by age 40 or 50 or what have you. Would you want to have that kind of information? Closely related question. If you did know how much time you had left, how would you act differently from what you're doing now? Would it focus your attention on making sure you did the things that were most important to you? And it's worth asking--it's sort of a useful test of what things you most value in life--what would you choose to do if you knew you had five years, ten years, what have you? There's an old Saturday Night Live routine where one of the actors is in the doctor's office, and the doctor gives him the very sad news that he's got two minutes left to live. And he says, "I'm going to pack a lifetime of enjoyment into those two minutes." And then of course, the point of the skit is he presses the down button on the elevator and a minute and a half goes by while he's waiting for the elevator to come. If you knew you had a year left or two years left, what would you do with that time? Would you be in school? Would you travel? Would you spend more time hanging out with your friends? An extremely striking example of this question, for me, occurred in this very class. There was a student in this class some years ago who was dying. And he knew that he was dying.
He'd been diagnosed with, if I recall correctly, cancer as a freshman--and his doctor had told him that he pretty much had no chance of recovery and indeed had only a couple more years to live. Faced with that, he had to ask himself, "Well, what should I do with my remaining years?" It was astonishing enough--but perhaps understandable--that somebody in that situation would decide to take a class on death and then submit himself to my getting up here week after week, talking about how there's no soul, there's no prospect for an afterlife, it's a good thing that we're all going to die. But faced with the question of what he wanted to do with his remaining couple of years, what he decided he wanted to do was finish his Yale degree--he set himself the goal of graduating college before he died. And he was taking this class second semester of his senior year. At least, he was taking it until Spring Break. By Spring Break he'd gotten sufficiently sick that his doctor basically said, "You can't continue in school anymore. You've got to go home." Basically, "You've got to go home to die." And indeed, he deteriorated progressively and then rapidly at that point. The faculty members who were teaching his classes that semester then all faced the question posed to them by the administration: based on the work he had done so far that semester, what kind of grade were they prepared to give him? Because, depending on which of his classes he passed and which of his classes he failed, the question was going to be, was he going to graduate or not? In fact, of course, he did manage to graduate. And Yale, to its, I think, real glory and credit, sent a member of the administration down to his deathbed to award him his degree before he passed away. So, as I say, it's a very striking story. I'm not sure how many of us would decide the last thing we wanted to do with our remaining years is to spend them in college.
Well, what is it that you'd want to do? And again, to move back and ask ourselves a larger question, would knowing how much time you have be something that would allow you to actually embrace those choices, or would it instead just be a burden? That's the kind of question we have to face when we think about the fact that we don't know how much time we've got. Is that something that increases the badness of death, or does it reduce its significance somewhat? Here's another feature. In addition to the inevitability, in addition to the variability, in addition to the unpredictability, there's the fact that death is, as I like to think of it, ubiquitous. I don't just mean the fact that people are dying all around us, but rather that you yourself could die at any time. There's never any getting away from the possibility that you'll die now. Even if we had unpredictability, it wouldn't necessarily follow that death was pervasive in this way. The point I've got in mind here is this--even when you think you're perfectly safe, you could of course die of a stroke. You could die of a heart attack. Even somebody who's young could have an aneurysm. Or one of my favorite examples, you could be sitting in your--you read this sort of thing in the newspaper periodically--you could be sitting in your living room when suddenly an airplane crashes into your house, killing you. These sorts of things happen. You thought you were safe. You were watching reruns on television--the next minute, you're dead. The fact that you could die and you don't know when you're going to die doesn't yet entail that you could die at any minute, at any moment. But in fact, that's true of us as well. Yet another example close to home. I remember--before I taught here I used to teach at the University of Illinois at Chicago.
And once I was driving down the highway and a car pulled in without looking--pulled in from the entrance ramp--and clipped my car, causing my car to go careening across three lanes of traffic, spinning out of control. And I remember quite clearly thinking to myself as that happened--the whole thing lasted only a few moments--but I remember thinking quite clearly, "I'm going to die." Now, as it happens, I didn't die. I walked away from the accident, and the damage to my car was rather minimal. But it could've happened like that. Death is--the possibility of death is--ubiquitous. It's pervasive. We have to ask ourselves then, does this make things worse? It certainly feels, to my mind, as though it's an extra bad about the nature of death. It would be nice to get a breather. Imagine, if you will, that there were certain locations, certain vacation spots, where as long as you were there you couldn't die. Wouldn't it be nice to be able to go someplace and just for a period think to yourself, "Well, you know, right now I don't have to worry about that. It doesn't even have to cross my mind." Maybe if there were these sorts of death-free zones, they'd get rather crowded. So perhaps we should change the example. Instead of having death-free zones, imagine that there were death-free times. Just suppose, for whatever reason, nobody could die between twelve and one. You could just put it out of your mind. Wouldn't that be nice? All right, one o'clock, you take the mantle back on. But wouldn't it be nice to just have a certain period of time every day when you didn't have to have it be so much as a remote possibility? Or suppose there were certain death-free activities. Maybe reading philosophy would be something that as long as you were doing it you couldn't die or, as perhaps some religious traditions might've taught, as long as you were engaged in prayer you couldn't die. Wouldn't that be nice? Or turn the entire thing the other way around.
Suppose that most times and most activities were death free, but certain activities introduced the possibility of dying. So you couldn't die unless you were engaged in certain activities. You would be immortal, but not immortal against your will. There'd be certain activities, perhaps for example putting a gun to your head, that would put an end to your life. So even if immortality would be bad, there would be certain things you could do that could end it. Ask yourself, what sorts of activities would you engage in if you knew that those activities carried with them the risk of dying? So most of the time you couldn't die. What things would be so important to you that you'd be willing to suddenly risk death for the sake of doing those things? You like art. Is art important enough to you that you'd be prepared to look at a masterpiece if you knew that while you were enjoying it you could die, but that wouldn't happen otherwise? Is sex great enough that you'd be prepared to run the risk of dying while you were engaged in sex? Again, it's a nice lens for asking yourself what are the things that are most valuable to you, by asking which of them are so valuable you'd be prepared to do them even if they would introduce what isn't otherwise there, namely, the risk of death? Now, in posing the question that way, I've been assuming that these are things you'd do despite the fact that they run the risk of death. I suppose there's a further question we have to ask: are there things that would be worth doing precisely because of the fact that they introduced the risk of death? Now, I've got to admit that when I pose that question, it sounds rather bizarre.
At least, putting aside the possibility that we've now lived our hundred thousand years and have exhausted all that life's got to offer us, certainly to engage in activities now, while life still has so much more to offer--to engage in activities now precisely for the chance of dying--that strikes me as bizarre. And yet, it seems to me that there are many activities, and if not many at least several activities, that people do precisely for that reason. For example, let me tell you something I know is going to shock you. Did you know there are people who jump out of airplanes? Now, admittedly they've got this little piece of cloth that gives them a decent chance of not killing themselves when they jump out of airplanes. But these things do fail. Every now and then you read in the newspaper about somebody whose parachute failed to open, and so they died. And I ask myself, why? What could possibly drive somebody to jump out of an airplane with nothing but a little piece of cloth between them and death? And the answer that strikes me as most plausible is, it's the very fact that there's a significant chance of death that helps explain why people do this. Now, I know if you talk to some of these people, they'll often say, "Oh, no, no, no. The views are so glorious," or something like that. But I think this is rather an implausible suggestion because, of course, you could have these glorious views just by going up in the airplane and looking down from the safety of your airplane. Part of the thrill has got to be--or so it seems to me--part of the thrill has got to be the very fact that they now have an increased risk of death. The chance of dying is part of what drives somebody to jump out of an airplane.
Well, if that's right, then should we say that the pervasiveness of death, the ubiquitousness of death--the thing that I was earlier suggesting was oppressive--wouldn't it really be nice to have a death-free time or a death-free location or death-free activities? Maybe I was wrong in suggesting that. If the chance of death would add a kind of zest, then perhaps the ubiquity of death is actually a good thing rather than a bad thing. Well, I'm inclined to think, at least in my own case, that that's not right. And perhaps the explanation has got to be that the ubiquity of death is this kind of background, constant hum. The fact that we're always facing some risk of death recedes into the background in the way that most of us don't hear background noise--and what jumping out of an airplane does for you is it spikes the risk of death. So, it's not really good enough to just have some risk of death--it's got to be a greater risk than usual. If that's right, if that's the psychology, then even for those death thrill seekers, the ubiquity of death won't necessarily be a good thing, because being constant, it just recedes into the background. All right. So again, what I've been asking us to think about are various aspects of death that might contribute to either increasing or perhaps in certain ways reducing somewhat the badness of death. There's one more aspect that I want to take a couple of minutes and have us think about, and that's this. Prior to this most recent discussion, I talked about the value of life--some rival theories about what makes life worth living. And for the last lecture or so I've been talking about, in addition to the deprivation account, the additional things that contribute to the badness of death. So you might think, well, what about the human condition as a whole? What about the fact that it's not just that we live, or for that matter it's not just that we die? What's true about humans is that we live and then we die.
That's the human condition--life followed by death. You might ask, what's the value of that entire combination? Now, the most natural thing to suggest would be, well, you get clear on your favorite theory about the value of life, whatever that is. You get clear about the kinds of questions we've just been asking about the badness of death, whatever that is. What's the overall assessment of the human condition? You might think, well, that's just a matter of adding up the goodness of life and subtracting the badness of death and seeing what it comes to. I suppose, again, the optimist says, "Yeah, death is bad, but life is good, sufficiently good to outweigh the badness of the fact that we're going to die. On balance, it's a good thing to be born." And pessimists might be those who say, "No, no. On balance, the negative of death outweighs the positive of life." But I want to pause for a moment and note that this assumption--that the way to think about the value of the combination is just a matter of adding the goodness of life and the badness of death and summing them that way--may not be right. Because sometimes the value of a combination is different from the value you would get by just thinking about each one of the parts in isolation and then adding them up. A kind of addition approach to the values of wholes may not always be correct. Here's a nice simple example to make that point. My two favorite foods in the world are probably pizza on the one hand and chocolate on the other. I know I've shared my love of chocolate with you before. I don't recall having shared my love of pizza with you before, but there it is--two favorite things I love--love pizza, delicious; love chocolate, delicious. Take these two delicious things and combine them into a chocolate-covered pizza. Oh my God! The whole idea just sounds disgusting. And it is, I take it, disgusting.
But you wouldn't notice the disgustingness if you just thought about the value of pizza in isolation and the value of chocolate in isolation. The value of chocolate-covered pizza is not just a matter of summing up the value of the parts taken in isolation. You've got to think about what we might dub "the interaction effects." So let's ask ourselves, are there any interaction effects when we talk about the human condition, that it's life followed by death? We've thought about the value of life in isolation; we've been, in effect, thinking about the value of death in isolation. Does the fact that death follows life--does that produce any interaction effects between the two, which need to be added into our formula--added into the mix as well? Well, there's obviously, I suppose, two possibilities. Well, really three. Possibility number one is, no, it doesn't make any difference--uninteresting possibility. More interestingly--two remaining possibilities. Yeah, there are actually some ways in which the combination ends up becoming worse. The interaction effects make things even worse, and we can't overlook those negative interaction effects. And there's also the possibility that there might be some positive interaction effects. Let me start briefly by mentioning a possibility for a positive interaction effect. Because of the fact that you're going to die, obviously enough, it's not just that you'll get whatever life you get, but there's a finite amount of life that you're going to get. Life is a scarce resource. It's precious. And we might be attracted to the thought that the value of life is increased by its very preciousness. There's a kind of aspect of value for many of us where we feel that something's especially valuable if it won't endure, if it's fragile, or if it's rare. This can enhance the value of something. And so, arguably, the fact that life is precious, that it won't endure, could actually increase its value for us.
There's a short story by the science fiction writer Orson Scott Card, where the basic point of the story is that of all the life forms in the universe, we, here on Earth, are the only ones that are mortal. And because of this we are the envy of the rest of the universe. It's not so much that immortality, what the rest of them have, is unattractive or boring. It's perfectly fine, but they envy us for our finite lifespans, because what we've got and they don't have is something that's rare for each individual--something that's not lasting, something that's precious in that way. All right, it's a possibility. So, it's possible that the very fact that we're going to die causes an interaction effect with our life so that there's an upside to it. It makes our life fragile, ephemeral, and as a result of that, more precious. But it's also possible--and this is actually compatible with accepting that fact--that there might be some negative interaction effects. It could be that in thinking about the nature of the combination we're led to see that in certain ways the interaction effects are negative, are bad ones. Well, here are two possibilities for that thought. The first possibility I think of under the heading "A Taste Is Just a Tease." It's as though we live life for a while, getting a feel for all the wonderful things life could offer us, and then a moment later, as it were, it's snatched away from us. It's sort of adding insult to injury that we're offered just a whiff. It's as though somebody brought this delicious meal before a hungry person--allowed them to see what it looked like, allowed them to smell the delicious aromas, perhaps gave them just one little tiny forkful to see just how beautifully delicious the food was. And then they snatched the whole thing away.
You can imagine somebody who says, "Look, it would be better never to have had the taste at all than to have the taste and then not be allowed to have the entire meal." That's something you might not notice if you just focus on the intrinsic nature of the taste. After all, the intrinsic nature of the taste was positive. Or if you just focused on the intrinsic character of not having the meal. After all, not having the meal is just an absence of a certain experience. To capture what's excruciatingly undesirable about the two, you need to think about the two in combination. It's an interaction effect. And we might think, look, this is one of the negative things about the human condition: that we get a taste of life--nothing more--before it's snatched away. That's one possibility. The second possible thought that comes to mind for me, in thinking about the negative interaction effects, I think about under the title "How the Noble Have Fallen." Right now, there's something amazing about us. We are people. Who knows what's out there in the universe, but at least on Earth we may well be the only people there are. Now, who knows? Maybe dolphins or some of the great apes. But at any rate, it's a rather select club. As I said early in the semester when I said I'm a physicalist, I believe that people are just machines--but we're not just any old machine. We're amazing machines. We're able to love. We're able to write poetry. We're able to think about the farthest reaches of the universe and ask what our place is in the universe. People are amazing. And we end up rotting. We end up corpses. For many of us, there's something horrifying about the thought that something as amazing as us, as exalted and valuable as us, could end up something as lowly and unimportant as a piece of rotting flesh. Again, think about it.
The image here that comes to mind for me is one of these deposed kings who ends up waiting on tables to make a living in New York. And you might think, "All right. The life of a waiter is not the worst thing in the world." But there's extra insult to injury, again, when the person's got to remember that he used to be something extraordinary, a ruler. Again, if you just thought about life as a ruler--well, pretty good, thinking about it in isolation. Life as a waiter--not so bad, thinking about it in isolation. To see the nature of the problem you've got to think about the fact that it's a combination package. There is something especially insulting about having gone from king to waiter. How the mighty have fallen. And that fate is waiting for all of us. It's a fact about the human condition that the amazing things we are don't stay amazing. We turn into pieces of rotting flesh, decaying. So two possible negative effects--the taste is just a tease, and how the noble have fallen--on the one hand. One possible positive effect--the extra preciousness of life--on the other. I'm not quite sure how, on balance, we should say these things play out. Again, I suppose we could have different views. On the one hand, the optimists might say, "Even when we throw in the extra interaction effects, even the negative interaction effects, the overall nature of the human condition is positive. So it's a good thing to be born, even though your life is going to be followed by death." And against that, we could have the pessimists who say, "The negative side, especially once we throw in the negative interaction effects, is so great that it would be better never to have been born at all." That's the pessimist view. Given that we're going to die, this fact seeps back in and poisons the nature of life, or perhaps poisons the nature of the whole, life followed by death, so that on balance the whole thing's negative.
Better to have not had any of it, better to have not been born at all, say the pessimists, than to have this combination package of life followed by death. Now, for myself, I'm sufficiently optimistic that I'm inclined to think life's wonderful. The negative combination effects that I was talking about are certainly there, but on balance I think the human condition for most of us is a good one. It's better to have been born--even though that's followed by death--than never to have been born at all. But I do want to emphasize the point that even if we were to accept the pessimist's conclusion that it would be better never to have been born at all, it doesn't follow, at least not without further argument, that the right response to that realization--if it is the correct realization--is to commit suicide. It's a tempting thought, right? To go philosophically from "life's so bad, given the nature of the human condition, life followed by death, that it's better never to have had any of it than to have just had a taste and a tease and so forth," to saying, "Once I've shown it's better never to have been born, it follows that suicide is the appropriate response." But in fact, as a matter of logic, that doesn't follow at all. Because if you think about it, suicide doesn't change the fundamental nature of the human condition, life followed by death. It's not as though if you kill yourself you somehow bring it about that you've never been born at all. It's still the case that if there's something horrible about having just a taste--well, indeed, if you commit suicide you've made it an even shorter taste. If there's something degrading or ignoble about being a person who is going to become a corpse, committing suicide doesn't alter that fundamental fact either. It just makes the insult come sooner.
So, even if we were to agree with the pessimists that it would be better never to have been born at all--as the old joke goes, show me one person in a thousand who's so lucky--we have all been born. And from the fact, even if we were to agree with it, that it would've been better if we hadn't been born--instead of feeling sorry for unborn Larry, perhaps we should envy unborn Larry; that's what the pessimists say--even if that were true, it wouldn't follow that suicide was an appropriate response. That doesn't mean, of course, that suicide isn't ever an appropriate response. We're coming on toward the end of the semester, and the last topic we'll be talking about is indeed the topic of suicide. When, if ever, is suicide an appropriate, rational, or moral response to one's situation? Let's hold off on thinking about that question a bit further. Before we get to suicide, the question that's going to entertain us for the remaining few weeks is this: How should one live, in light of the facts about death that I've been laying out in the semester up to this point? And one possible response, the last one we'll look at, is that what you should do, at least sometimes, is kill yourself. We'll come to that. We're going to spend the next couple of weeks asking ourselves about different aspects of the question: what should our response be to the fact of our death and the specific features and nature of death that we've been exploring? But the very first question I suppose we really need to ask is this. Should we be thinking about all this at all? Well, I realize that for you guys it's too late, right? It's sort of late in the day, for students who have been through the better part of a semester thinking about the nature of death, to argue that maybe it wasn't such a good idea for you to take this class in the first place.
But as theorists, we could be interested in the theoretical possibility that the right response is not to think about the facts of death at all. Look, in principle I suppose there are three different reactions. So, I make various claims of the sort that I've been making: "Well look, you know, we're just physical objects. When these objects break, we cease to exist. The objects don't get put back together," and so forth and so on. One possibility, of course, is simply to disagree with me about the facts. Of course, if you do disagree, I think you're mistaken, so I'll think of you as denying the facts--but all right, that's a possibility. Another possibility, the one I'll turn to a little bit later, is to admit the facts and live accordingly. Of course, we haven't yet asked ourselves how you should live if you recognize and take into account those facts. That's the question we'll turn to. But there's the middle possibility, which is not so much to think about them and deny them, nor to think about them, accept them, and act accordingly, but simply not to think about them. Maybe the best response to the facts of death is just to put it out of your mind. Don't give it any thought at all. Now, on the one hand you might think, that can't possibly be the right response, the appropriate response. After all, how can it be appropriate to disregard, to put out of your mind, facts? Well, that all sounds very nice, but I think that claim has got to be mistaken. There's nothing unacceptable or inappropriate or misguided about not thinking about all sorts of facts that you might have learned at some point or other. Here's my favorite example of stupid facts I was forced to learn when I was younger--state capitals, right? I've gotten pretty far in my life, and as far as I can tell I've never, ever, ever had to remember the capitals of the 50 states. So, I just don't think about it.
Pretty much I think about it only once a year, when I'm giving this very lecture. I start asking, how many state capitals can I remember? And the answer is, really not all that many of them. Not thinking about those facts that I knew at one point--just not all that objectionable. So suppose the facts about life and death are as I've described them. Until we say something more, it's not clear that we shouldn't just, all right, note them, store them away, and forget about them, just like the facts about the state capitals. That seems odd; that seems misguided. But why? What is it about the facts about life and death that seems to make it misguided to think we should just put them aside and pay no attention to them? Presumably because we're led to the thought, we're attracted to the thought, that the nature of death, the facts about death--whatever they are--should have an impact on how we live. The appropriate way to live gets shaped, at least in part, by the fact that we're going to die, that we won't be around forever. If that's right, then it seems as though there'd be something irrational and inappropriate about simply disregarding those facts. Let me tell you two stories that might--well, look, before I tell you the stories, here's the other side. Suppose somebody said, "Yeah, it's true, but if I thought about the nature of death, the fact that the 50, 80, 90 years I've got on this Earth is all I'm going to have--if I thought about that fact, it would just be overwhelming. It would be crushing. I'd be unable to go on with my life." People sometimes claim that that's the case and, because of that, the right thing to do is to not think about it. You've read at this point, long since, Tolstoy's Death of Ivan Ilych. The people in the Tolstoy story seem to have put the facts of mortality out of their mind. Why? Presumably because they think that facing it is just too crushing and overwhelming.
So the way they cope with it--they think the appropriate response is to put it aside, disregard the facts about death. Well, as I say, there seems to be something amiss about that reaction. That was certainly the point that Tolstoy was trying to get us to see. There's something wrong about lives, something inauthentic about lives, that are lived without facing the facts of our mortality and living accordingly, whatever the appropriate responses might be. Here are two stories not having to do with death per se that may help us get a feel for the oddity of trying to disregard these facts. Suppose that you're on a hot date, or about to go out on a hot date, with Peggy Sue or, depending on your preferences, Billy Bob. And your roommate holds up an envelope and says, "Written in this envelope are certain facts about Peggy Sue or Billy Bob. I'm not going to tell you what these facts are yet. They're in the envelope. But I'll give you the envelope and you can open it up and read them. But I do want to tell you this one thing. It is indeed the case that if you were to read these facts, if you were to think about these facts, if you were to know the things written down in the envelope, you would not want to go out with Peggy Sue." And you say to yourself, well, let's see. Right now I want to go out with Peggy Sue, but if I knew these truths--It's not that you think your roommate has made it up, that these are lies or slander. You really believe, and it is in fact the case, that the things written down in the envelope are true. And so you know that if only you were to read these things in the envelope, you would change your mind and no longer want to go out with her. And so what you say is, "Don't show me the envelope." That seems odd. It doesn't seem like it makes sense. If there are things that would change your mind about your behavior, and you know that they would change your mind, how can it be rational to disregard them? Here's another story.
You're about to drink a milkshake, and your roommate comes rushing in and says, "I've got the lab report. I had my suspicions about the milkshake, and so I took a sample and I rushed it down to the lab. I've got the lab report." You're about to drink it, right, because you're thirsty, it's a hot day, you love milkshakes. And your roommate says, "Inside the envelope are facts about this milkshake such that--I promise you it is indeed the case--if you knew these facts, you would not drink the milkshake anymore." And you say, "Oh, thank God. Don't open the envelope," and you drink the milkshake, disregarding the facts. That seems inappropriate. Well, if it really was true that if only we faced the facts about our mortality we would live life rather differently, how could it be reasonable for us to disregard those facts? Well, that's the puzzle. Or maybe we shouldn't call it a puzzle at all. Maybe the answer is, that just shows the disregard option is not really all that reputable. What we have to do is either deny the claims I've made about the nature of death, or else go on to ask--supposing they are true--how should we live in light of them? Maybe the disregard option just isn't one that we can actually take on as an intellectually acceptable alternative. But I suspect that that's probably a little bit too quick, because really there are two different ways in which facts could influence our behavior. And if we're not careful we'll overlook this distinction, even though I think it's an important one. Here are the two ways. On the one hand, it could be that certain facts, if you knew them, would cause you to behave differently without actually giving you any reason to behave differently. That's possibility number one. Possibility number two is that the facts change your behavior by giving you a reason to behave differently.
Let me show you an example of the first possibility, because that's the one I think we may be overlooking when we assume that disregarding can't ever make any sense. So, there you are kissing, making out with Peggy Sue or Billy Bob--whoever it is--and your roommate bursts in and says, "I have in the envelope certain facts such that if you were to think about them you would no longer want to kiss Peggy Sue or Billy Bob." Let me just tell you what the facts in the envelope are. They're certain facts about the nature of Peggy Sue's digestive system. You're making out after having had dinner, and while you're sitting there making out, food is making its way down Peggy Sue's digestive tract, being turned into shit. And eventually it's going to be excreted. And if you started picturing to yourself the feces inside Peggy Sue's digestive tract, and the fact that she's eventually going to be wiping the feces off of her behind, you might find it difficult to continue making out with Peggy Sue. Now, these are just facts, right? I didn't make any of these up, but there you are, as I'm talking about them, being grossed out as I describe them. Now, do any of these facts about the digestive system make it inappropriate to kiss another human being? Well, of course not. But for all that, thinking about those facts makes it rather difficult, while you're thinking about them, to continue enjoying kissing the person. So there are certain facts about the digestive tract such that if you think about them you can't do something--kiss the person. But for all that, it's not because you've got any good reason not to kiss the person. It's not that the facts about the human digestive process give you reason not to kiss her. They cause you to change your behavior without giving you any reason to change your behavior.
So, when the roommate comes running in, holding the envelope, and says, "I have in this envelope certain facts such that if you read these facts, and thought about these facts, you would stop kissing this person," the question you should put to your roommate is, "Are these facts that would merely cause me to change what I'm doing, or are these facts things that would give me some good reason to change?" If these are facts about how Peggy Sue likes to kiss and tell, or then goes around and talks about who's a good kisser and who's a bad kisser, maybe that gives you a reason to not continue what you're doing. So the facts could be things that would give you reason to change your behavior. But the mere fact that they would change your behavior doesn't yet tell you whether they're reason-generating facts. If they're mere causes and not reasons, then maybe it's perfectly okay to disregard them. If your roommate comes in and starts trying to tell you facts about the human digestive system, you say, "Not now." Disregarding is sometimes the appropriate thing to do. Well, what about the facts about death? Are the facts about death things that it's appropriate to disregard? A bold claim would say, "Yes." A bold claim would say, "The facts about death, if I thought about them, would change my behavior, but not because it would give me a reason to change my behavior--simply because it would influence my behavior." And, given that, we might say, better to not think about them. That would be the bold claim to make at this point. Suppose, for example, that, the right way to live, in light of the facts about death, is to live life to the fullest. But suppose if you think about death you just get too depressed and you can't live life to the fullest. It's not that the facts about death give you reason to stay in your room and sulk. It's just that the facts about death cause you to stay in your room and sulk. 
If that was the case, then disregarding, always disregarding, the facts about death might well be the appropriate response. Well, that would be a rather bold claim. I'm not inclined to believe that the bold claim is right. Should we conclude therefore that, no, you should always be thinking about the facts about death? No, I'm inclined to think that that other bold claim, on the other side, is probably mistaken as well. So there you are, one more time, one last time, making out with Peggy Sue or Billy Bob and your roommate comes in and starts trying to tell you about the fact that he's taken Shelly Kagan's class on death or he's been studying in some biology class, and he wants to tell you about how human bodies decay when they turn into corpses. As he begins to tell you this story, you start picturing Peggy Sue as a rotting corpse. Suddenly, you don't really feel like kissing her anymore. It's sort of like the digestive tract story. It's not that, as far as I can see, the fact that she's going to be a corpse gives you any reason not to kiss her. It's just that thinking about the fact that she's going to be a corpse causes you to not want to kiss her, not be able to enjoy kissing her. So, I'm inclined to think that the right position here is a kind of moderate one, a modest one. There are times and places for thinking about the facts of death. When you're kissing somebody--that is not the time and that is not the place. The position that says, you should always have the fact of your mortality forever before your mind's eye--I think that's misguided. Similarly, though, anybody who says, you should never think about the facts of mortality and the nature of death--I think that's misguided as well. There's a time and place. But that still leaves us with the question. All right, so suppose this is the time and place. If ever there was a time and place for thinking about the facts of death and how it should influence our life, it's right now, in a class on death. 
So, we still have to face the question, how should you live? What is the appropriate response to the facts about life and death? That's the question we have to turn to next time.
Philosophy of Death, Lecture 23: How to Live Given the Certainty of Death

Professor Shelly Kagan: At the end of last class, I quoted some words from Kurt Vonnegut, a kind of deathbed prayer confession that he'd written in one of his novels, in which the basic gist of the prayer is to express gratitude--whatever the content of your life--for the fact that at least you've been able to live at all. As he put it, most mud isn't lucky enough to sit up. He feels lucky to have been some of the sitting-up mud. He loved everything he saw. When I read that quote, I did not know that Kurt Vonnegut had died the night before. Immediately after the class ended, a visitor to the class brought this fact to my attention. So, I can't pass without commenting on that death, and just remark that I hope that to the very end, Kurt Vonnegut, who lived until he was 84, realized how lucky he was to be some of the sitting-up mud. The question I want to turn to now is this. So, we've been going over the various facts about the nature of life and death. And the question then is, how should we live, in light of the fact that we're going to die? Previously, we've talked about what emotional response we should have to that. And I've argued, as I just reminded us, that although perhaps the most common reaction is one of fear or terror at death, it may in fact be that we should be grateful and consider ourselves lucky that we were able to have had life at all. But how, then, should we live in light of the fact that we're going to die? And the immediate answer that comes to mind seems almost like a joke. I want to say, well, we should be careful, given that we can die, that we will die. There used to be a TV show, a cop show called Hill Street Blues. The show began every day with the sergeant going over the various crimes and investigations that were going to fill up the day's episode. And he'd always end, as he sent off his cops.
He'd end by saying, "Be careful" or "Be careful out there." But the particular kind of care that I have in mind isn't just this pure fact, that if you're not careful, you won't notice that the car's coming down the street and you'll be hit by the car and that'll be the end. The fact that we're going to die intuitively seems to require a particular kind of care, because, as we might put it, you only go around once, right? You don't get to do it again. And so, it seems as though the fact that we're mortal, the fact that we've got a finite lifespan, requires us to face the fact that intuitively we can blow it. We could do it wrong. Now, the nitpicky part of me wants to point out that it can't be mortality, per se, that has this implication. Even if we lived forever, we could still do it wrong. After all, whatever it is you've filled your life with, with an immortal infinite life, there's still going to be the particular pattern of actions and activities that you engage in. And that particular pattern could still be one that wasn't the best pattern that was available to you. So, the possibility of having blown it, of having lived the wrong kind of life, is a possibility that's going to be true of us, whether or not we're mortal. And yet, for all that, it seems as though mortality adds an extra risk, an extra danger of blowing it. Look, suppose we lived forever--just to have a kind of simplistic example. Imagine somebody who spends his eternity counting the integers--1, 2, 3, 4, 5, 6. Well, that might not be as valuable as an eternity spent doing something else, let's say, doing more complicated math. But still, if you've spent a million years or a billion years counting the integers and then realized that was sort of pointless, you could always start over by doing more interesting, deeper, more worthwhile math. Immortality gives you a chance of starting over. It gives you the possibility of do-overs.
We might then worry that what's especially bad about death, the fact that we're mortal, is that it robs us of the chance of do-overs. But of course, that's not quite right either. Even if you don't live forever, you live 80 years or 100 years, you have the chance to reappraise your life at the age of 20 or 30 or 50 and decide you need to change course. So, it's not exactly as though the possibility of do-overs disappears by death itself, via death itself. Still, the thought that death comes when it does seems to push us in the direction of thinking we've still got to be very careful because, of course, given that we're mortal, we have only a limited period of time in which to do the do-overs. There are two kinds of mistakes, really, that we might catch ourselves in. We might discover, on the one hand, that we made some bad choices in terms of what we were aiming for. And on the other hand, we might find even if we made the right choices in terms of our goals, we flubbed it in terms of actually accomplishing what we were trying to accomplish. And so we literally have to start over again, and try again. So, there's two kinds of care that we have to take. We have to be careful in our aims and we have to be careful in our execution of our aims, because we have, as it were, a rather limited amount of time to do it over. Now, again, the nitpicky part of me wants to say strictly speaking, it's not the fact that we are mortal, per se, that all by itself means we have to be especially careful. After all, suppose there just weren't all that many things worth doing. And suppose they weren't all that complicated, all that difficult to do well. Suppose there were only five things worth doing. And even if you couldn't necessarily do every single one of them right the first time out, at most it would take two or three tries. And by a try I mean maybe an hour or two. Well, that would be a pretty impoverished world that could only offer us that much. 
But after all, if that was the way the world worked and we had a hundred years, we wouldn't really have to worry all that much about being careful. We'd have plenty of time to aim for each of the five things worth having and plenty of time to get each one of the five things right. A hundred years of life would be more than enough. We wouldn't have to be careful. So, it's not just the fact that we're mortal that requires us to be careful. It's the fact that we have a relatively short span of life relative to how much there is worth aiming for, and how complicated and difficult it can be to get those things and get them right. It's because of the fact that there's so much to do and doing it properly that we have to be careful. We just don't have enough time to flail around, try a little of this, try a little of that. Somebody who lives like that may well find that the things they aimed for weren't really the best choices. You don't have to decide that these things weren't worth having at all, given the relatively short period of time we've got. We've got the extra burden of deciding what are the things most worth going after. And we have to face the prospect, the chance, that we'll look back and discover that we didn't make the best choices there. We aimed for the wrong things, not necessarily things that weren't worth having, but given the limited number of things we were going to be able to fill our lives with, in that sense, the wrong choices. And we may discover as well that we were not sufficiently careful, attentive in how we tried to achieve these things. Because it's not as though--although given the way life is, you've got the chance for do-overs, you don't have time for a whole lot of do-overs. And so what death forces us to do is, to be careful. An analogy that comes to mind here is an artist who goes--a musician who goes into a recording studio. And look, he can start trying to record his songs to cut an album. 
And he may only have a certain number of songs in his repertoire. And so if he's got a long enough period of time, a month in the recording studio, he's got plenty of time, or she's got plenty of time, to sing a couple of songs. Maybe these wouldn't be the best things to record. Let's give it a try and we'll see. Didn't get it right the first take. Let's record it again. Let's try it a third time. Let's try it a fourth time. If you've got enough time, it's less pressing to get clear before you start, or as you're going along, what are the songs I should try to record, and can I get it on one take, or at most two? But if instead of having a month in the recording studio, you've got only a week in the studio, or a day in the studio, suddenly everything's much more pressing. Time is much more precious. You've got to decide early on just which are the songs that it makes sense to record? And yeah, there are some other songs, but these seem to be the better choices. And when you record them, you can't be as careless and inattentive as you try to get them down. You've got to try to get it right the first time, or at worst, the second time. That's, it seems to me, the situation we find ourselves in, not just given the fact that we die, but, we might say, given how incredibly rich the world is, how many things it offers us, how many choices we have in terms of what's worth going after. But for many of these things, given how difficult they are to accomplish, although we've got the chance for do-overs, both in terms of changing our mind about what we should be aiming at, and trying again, for the things we have aimed at, we've got to be careful. The fact about our death requires paying attention. It requires care. Well, having said that, of course, the immediate question then, is all right, so I'm paying attention. I'm trying to be careful. What should I do with my life? How shall I--What should I fill it with? 
We've, previously in the class, talked about the possibility that being alive, per se, may have some value. But above and beyond whatever stand we take on that, it's certainly also the case that part of what adds to the value of our lives are the contents of our lives. And so we need to ask, well, what kinds of contents should we try to fill our lives with? Now, I won't try to answer that. To ask the question, what are the things really worth going after in life? is to come up to the edge of asking, well, just what is the meaning of life? What's really worth going after? And although that is indeed an important, perhaps the important question, it's the question, I think, for a different class. And so having come close to the edge of that question, I'm going to now back away from it. But still, it seems we might say, in broad strokes, there are two different strategies that we could adopt. And it's worth at least pausing to think about these two strategies. Strategy number one says given that you've only got a finite amount of time--Actually, the basic underlying thought behind both strategies is just this. We haven't got much time. Pack as much as you can into life. Pack as much as you can in. But there are two basic strategies about how do you put that idea into practice. And strategy number one says given the dangers of failure if you aim too ambitiously, you should settle for the kinds of goals that you're virtually guaranteed that you'll accomplish. The pleasures of food, company, sex, ice cream. One of the paper topics asks you to reflect on the philosophy, "Eat, drink, and be merry, for tomorrow you die." Well, that's one of the strategies. We're going to be dead tomorrow. And so while we're here, let's try to pack in as much as we can, by going for the things that we've got a very high chance of actually accomplishing. Strategy number two says that's all well and good. You've got a pretty high chance of succeeding at that. 
The trouble with strategy number one is the goods that you can achieve, the sort of sure thing goods are small. They're rather small potatoes, as things go. Some of the most valuable goods in life are things that don't come so readily, don't come with guarantees of achieving them. You might want to write a novel, compose a symphony, or for that matter raise--marry and raise a family. Some of these things, strat--fans of strategy number two argue, these things are the most valuable things that life can offer us. So that a life filled with these larger goods is a more valuable life than a life filled with the small potatoes goods. I suppose fans of the "Eat, drink, and be merry" strategy don't like to call those "small potatoes goods," but that's the kind of language that might be offered by fans of strategy number two. And it seems to me that as a claim about which life, if only you had it, if you had a guarantee--If God were going to say, "Look, which life do you want? I promise you'll get it. The life filled with food and drink or the life filled with accomplishment?"--perhaps most of us would say well, it's the life filled with accomplishment that's the more valuable life. The trouble, of course, is, the life with the greater accomplishments, the life aiming for greater accomplishments, is also a life with a greater chance of failure. You aim for writing the great American novel and ten years later, you still haven't finished it. Twenty years later, you decide you don't have it in you to write the great American novel. You try to produce a business and it goes under. So, what's the right strategy to take? I suppose many of us would be inclined to say, well, the third strategy. There's a third strategy that's the obviously right thing to do, which is, get the right mixture. Aim for a certain number of--what should we call them? Large potatoes. 
Aim for a certain number of the large accomplishments, because if you do manage to get them, your life will have more value. But also throw in a certain sprinkling of the smaller things, where you're at least assured of having gotten something out of life. Well, that's all well and good as well, but it just now brings us to the next question. What is the right mixture, after all? Well, I'm not going to try to answer that one either. But again, those of you who choose the topic, the "Eat, drink, and be merry" question, basically I'm inviting you, in that topic, to reflect on that question. Here's a different thought. As I said, the underlying thought behind both the go-for-the-big-things and the go-for-the-small-things strategies was: pack it all in. The underlying thought seems to be, look, as long as you've got a life that's got valuable contents, the more, the better. You might say here's common ground between the two strategies--the more, the better. Now previously, I've argued that immortality would not actually be a good thing. Rich and incredible as the world is, eventually the goods of life would run out and immortality would be dreadful. But having said that, that's not to suggest that we--most of us--come remotely close to that condition. For most of us, it's certainly true that dying at 30 deprives you of goods that would have come to you, if only you'd lived to 40. And dying at 40 deprives you of goods that would have come to you if only you'd lived to 50 or 60 or 80. So, one thing we're inclined to agree on is, other things being equal, the longer your life, the better. So here's a life, 50 years long. And suppose you live it with a certain amount of value in your life, 100 value points--whatever our units are for measuring just how good a life is. We'd say, look, better to have a life at that value that, instead of going through 50 years, went on for 100 years. Fair enough.
We might say, we all agree, don't we, that quantity of life's a good thing. And that does seem plausible. But at the same time we'll want to immediately say quantity of life may matter, but it's not the only thing that matters. Quality of life matters as well. And again, that point's fairly uncontroversial. If you had to choose between your life of 50 years at 100 value points or 50 years at, whatever that is, 130 value points, you'd rather have the second life. The length of life isn't the only thing we care about. The overall quality of your life is something we care about as well. And this, of course, is another topic that we've talked about previously. Just what is it that goes into making a life better than another? So, we now see, summing it up, yeah, got to pay attention to quality, got to pay attention to quantity or duration. Of course, the reason I just corrected myself is because you might say, if you want to think about it mathematically, it's all just a matter of quantity. As long as when we measure quantity, we bear in mind we need to measure not just the length of the life, but the height of the box. So, the area of the box here is 50 x 100 units--whatever that is, that's 5,000. I'm going to get another giggle here, right. Imagine our little units: one unit of quality for a year--a quality-year unit. So, whatever it is, 5,000 units here; 6,500 there. You might say, look, we can capture the thought that the duration of your life matters and the quality of your life matters by multiplying the two together. And without getting hung up on the numbers, as though there was any kind of precision here, the underlying thought's fairly clear. The area of the box represents the overall quantity that you managed to cram into your life in your 50 years. And we could start measuring different kinds of lives.
We might start worrying about, well, look, suppose I could live 50 years at 130 or I could live, whatever it is, 100 years at some other number that's a little bit less. We might say, oh, less quality, but longer quantity, longer duration, more valuable life filled in that last box. We see how it goes. But the question we need to ask is--so, if we've got this richer sense of quantity, where we multiply the duration of the life times how good a life you're having while you've got it, does that give adequate place to what we think is valuable? Does that give adequate place to quality in life? Let me draw some different boxes, some different possible lives to choose between. Suppose you had a nice long life, 150 years. Again, just for the sake of concreteness, we assign 50 quality points. So, the area is 7,500. Let's suppose, so you can get a feel for this, that the best life lived on earth so far was worth a 10. So this is an incredible life, to be a 50. And you get it for 150 years. A very nice life. Now, compare it with this life. Suppose that this life isn't really all that good in terms of how well off you are at any given time. It's plus one. Zero would be a life not worth having, though no worse than nonexistence. Negative numbers would presumably be lives where you're better off dead. This life is just barely worth having. It's plus one. But it's a very, very, very long life, so long that I couldn't draw it to scale. That's why we've got the "..." in the middle. Suppose it goes on for 30,000 years. Well, the math here is pretty easy. 30,000 times one is 30,000 in terms of the area. Okay. So, trying to choose between these two lives. Life A or Life B? In terms of quantity, our enriched notion of quantity, where you measure the length of the life times the height of the box, Life B's got more quantity of what matters--30,000 versus 7,500.
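For what it's worth, the box arithmetic the lecture is describing can be written down in a few lines. This is just a sketch of the total-quantity view under discussion; the function name and the specific numbers are labels I've chosen for the two hypothetical lives in the example, not anything from the lecture itself.

```python
# Total-quantity view of a life: value is the area of its "box" --
# duration in years times average quality per year, on the lecture's
# rough scale where 1 is barely worth living and 10 is the best life
# lived on earth so far.

def life_value(years, quality):
    """Area of the box: duration times quality per year."""
    return years * quality

life_a = life_value(150, 50)     # a long, extraordinarily good life
life_b = life_value(30_000, 1)   # a vastly longer life, barely worth having

print(life_a)  # 7500
print(life_b)  # 30000

# On the pure-quantity view, B wins (30,000 > 7,500) -- yet most of us
# would pick Life A, which is exactly the pressure the lecture puts on
# the idea that totals are all that matter.
```

Nothing hangs on these particular units; the point survives any rescaling, since B's area exceeds A's while A's peak quality is fifty times higher.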
And yet, most of us, when we think about this choice, do not find B to be a preferable life, even though the quantity of value--just suppose we could measure the quantity of whatever the goods are that we've got crammed into our life--well, this has very, very, very small amounts stretched over a very long time. The quantity's larger, but Life A seems preferable. Now, this may not be true for everybody, but for those of us who share that thought, you might say quantity isn't all it's about. Or rather: when we tried to take quality into account, it wasn't so much that we couldn't measure it; it's that if you reduce the importance of quality by folding it into quantity, so that what it's all about is the total amount that you're getting, well, the total amount's bigger in B than in A. If you don't think B's a better life, that suggests that totals aren't what it's all about. Well, what else might we then choose between with regard to A and B? Well, the natural response is to say, even though Life A is shorter, it attains a kind of peak, a kind of height that isn't approached anyplace in Life B. And perhaps, then, in evaluating lives and choosing between rival lives, we can't just look for the quantity of good, we have to look at the peaks. We have to look at the heights. In choosing between lives, it's important to think not just about how much did you pack in, total, but what were the greatest goods that you had or accomplished in your life? And perhaps, then, we should conclude, quality can trump quantity. Perhaps with the right quality in place, quantity becomes of secondary importance. Yeah, it might be that if we could have a longer life where we achieved great things, rather than a shorter life where we achieved great things, better to have the longer life. Quantity might matter, too, as long as we think the quality's what matters the most. But a more radical version of the theory would say, actually, quality's all that matters. The peaks are all that matter.
That, at any rate, is the position that gets expressed by Hölderlin in the poem "To the Parcae," to the fates. That was in one of the essays that I had you read. But let me read that now. "To the Parcae." A single summer grant me, great powers, and a single autumn for fully ripened song that, sated with the sweetness of my playing, my heart may more willingly die. The soul that, living, did not attain its divine right cannot repose in the nether world. But once what I am bent on, what is holy, my poetry is accomplished: Be welcome then, stillness of the shadows' world! I shall be satisfied though my lyre will not accompany me down there. Once I lived like the gods, and more is not needed [Kaufmann 1976]. Hölderlin is saying he doesn't care about quantity at all. If he can accomplish something great, if he can ascend to the heights and do something great with his poetry, that's enough. Once he's lived like the gods, more is not needed. So, in thinking about what we want to do with our lives, it's not enough to have the kind of theory that we've begun to sketch in previous weeks, where we think about what are the various things worth having in a life?--we also have to address this question of quality versus quantity. Is quality only important insofar as it gets folded into producing greater quantity? Or does quality matter in its own right as something that's worth going for, even when it means a smaller quantity? And if quality does matter, does quantity matter as well? Or is, indeed, quality all that matters? Is Hölderlin right when he says once I've lived like the gods, more is not needed? Now, Hölderlin, I imagine, in thinking about why that kind of life is the best kind of life he could aspire to, is thinking, in part, about the lasting contribution that his poetry makes. There's a sense in which, when we think about having done things like that, we feel that we attain a kind of immortality. We live on through our works. 
And so the next question I want to turn to, in thinking about strategies of how to live in the face of our mortality, is this: well, maybe a kind of immortality is worth going after. Or maybe, at the very least, we can take a kind of comfort in thinking that we have or can attain a kind of immortality. I emphasize the word "kind," of course, because strictly speaking, if you live on through your works, it's not as though you are literally living on. It's semi-immortality or quasi-immortality. I suppose people who don't believe in it would prefer to call it pseudo-immortality. Actually, this reminds me of a joke. Here's a Woody Allen joke. "I don't want to be immortal through my work; I want to be immortal through not dying." Well, as you know, previously I've argued that genuine immortality, unending life, would not be a good thing. But still, many of us aspire to this kind of semi-immortality. And actually, it can take, I think again, two broad forms. Sometimes people want to say there's a sense in which, although it's not as though you're literally living on, there's something like that going on, insofar as a part of you continues. If I have children, then literally--in my case, as a male--one of my cells continues. And then their cells continue in their children and their cells continue in their children. If you think of an amoeba splitting and splitting, and splitting and splitting again, part of the original amoeba could be there for many, many, many generations. Some people take comfort in the thought that, literally speaking, a part of them will continue--if not through the cells in my offspring, then perhaps at least through my atoms getting recycled, getting used again. And so I get absorbed into the universe, but I never disappear. Some people take comfort in that thought. The German philosopher Schopenhauer thought that this should reduce somewhat the sting of death.
He said, "But it will be asked, ‘How is the permanence of mere dust, of crude matter, to be regarded as a continuance of our true inner nature?'" And he answers, Oh! Do you know this dust then? Do you know what it is and what it can do? Learn to know it before you despise it. This matter, now lying there as dust and ashes, will soon form into crystals when dissolved in water. It will shine as metal; it will then emit electric sparks… It will, indeed, of its own accord, form itself into plant and animal; and from its mysterious womb it will develop that life, about the loss of which you in your narrowness of mind are so nervous and anxious. Well, that's a very moving passage, but I have to say, I don't buy it. I don't find any comfort at all in the thought that my atoms will still be around getting reused into something else. So, this first kind of semi-immortality, where you take comfort in the thought that literally there are parts of you that will continue, this strikes me as a kind of desperate striving, desperate reaching for straws. Perhaps in Schopenhauer's case, leading him to delude himself into thinking, "Oh, it's not so bad that I'm going to die and going to die soon. At least my atoms will still be around." It doesn't work for me. There's a second sort of approach, though, where it's not so much that you're supposed to be comforted by the thought that your parts will continue to last after you, but that your accomplishments will continue to last after you. Hölderlin writes poetry, which we're still reading some 200 years later. You can write a novel which can be read for 20 or 50 or 100 or more years. You might make some contribution to math or philosophy or science, and 50 or 100 years later, people could still be talking about that philosophical argument or that mathematical result. You might have other kinds of accomplishments. You might build a building that will last after you. 
Stone cutters, I've read interviews with stone cutters who take a kind of pride and comfort in the thought that long after they're gone, the buildings that they helped build will still be there. You might try to build a company that will last after you die. Or, for that matter, you might take pleasure and comfort in the accomplishment of having raised a family. Here, not so much the thought that some of your cells are in your offspring, but rather the thought that to have raised another decent human being is a nontrivial accomplishment, something worth having done with your life. And that accomplishment continues after you're gone. Well, what should we think about this second group of approaches to attaining semi-immortality? I've got to say that I'm of two minds when I think about them. Unlike the dust and the atoms stuff, where I just think you're deluding yourself, I find myself drawn to this second set of thoughts. I find myself tempted by the thought that there's something worth doing about producing something that continues for a while. That it's significant. And even if my life here on earth is a short one, if something that I've accomplished continues, my life is the better for it. That's Hölderlin's thought, I suppose. And it's a view that appeals to me. I suppose it explains, in part, why I write philosophy, in the hopes that the things I write might still be read 20 years after I die, or 50 years or, if I'm so lucky, 100 years after I die. Well, in certain moods, perhaps in most moods, I'm drawn by that thought. But in other moods, I've got to confess, I'm skeptical of it. I remind myself of Schopenhauer writing his little passage, his Ode To Dust, and I find myself saying, just like Schopenhauer was so desperate that he deludes himself into thinking, "Oh, it doesn't matter that I'm about to turn into dust. 
Dust is really, really important," I'm just deluding myself as well, when I think there's something grander, something significant, something valuable about having made an accomplishment, having achieved something that continues beyond me. So in certain moods, at least, I find myself thinking that I've just deluded myself. But that's only certain moods. And at least most of the time, I find myself in agreement with Hölderlin. Not necessarily in thinking that quantity doesn't matter at all--that to have written one great work is all you need, and more great works don't add anything. That strikes me as going too far. But at least to have done something significant that abides, that does seem to me to add to the value and significance of my life. Well, let me mention an entirely different approach. I'm going to give very, very short shrift to this last approach, but it's probably worth mentioning as well. All of the lines of thought that I've been discussing so far today have in common the underlying belief that the way to deal with the fact that we live and then we're dead is to try to make the life that you've got as good as possible, as valuable as possible, to pack as much into it as you can, even though there's room for disagreement about what's the best strategy for doing that. The picture is one in which we say we can't do anything about the loss of life, so the right response is to make the life that we've got as valuable as it can be, to see it as valuable as it can be. But there's a rather different approach. That alternative approach says, yes, we're going to lose life and that's horrible. But it's only horrible insofar as you think of life as something that it's bad to lose. After all, if we were to decide that life wasn't really a valuable gift, if it wasn't really something worth embracing, something that we could turn into something full of value, then its loss wouldn't actually be a loss. That's a point we've seen before, right?
The central badness of death is explained by the deprivation account. You are deprived of the extra life you could have had, life that would have been worth having overall. But if life isn't worth having overall, then its loss is not a bad thing, but a good thing. The trick, then, isn't to make life as valuable as it could be, but rather to come to recognize that on balance, life isn't positive, but negative. I know that what I'm about to say has a kind of Classics Illustrated simplicity to it, and it's a bit of an exaggeration, but in gross terms, we might say the first general outlook--that life is good and so the loss of it is bad and so the answer is to make as much of it as we can while we've got it--you might say that is, in broad strokes, the western outlook. And in broad strokes, the notion that life isn't really as good as we take it to be, but is, in fact, bad overall--perhaps it's an oversimplification to call it the eastern outlook, but at least it's an outlook that gets more expression typically in eastern thought than in western thought. The foremost example of this second outlook is, I suppose, Buddhism. There are four noble truths in Buddhism. The first noble truth is that life is suffering. Buddhists believe that if you think hard about the underlying nature of life, you'll see that everywhere there is loss. There is suffering. There is disease. There is death. There is pain. Sure, there are things that we want and, if we're lucky, we get them. But then we lose them, and that just adds to the suffering and the pain and the misery. On balance, life isn't good. First noble truth: life is suffering. And so, armed with this estimation, what Buddhists try to do is to free you from attachment to these goods, so that when you lose them, the loss is minimized. And indeed, Buddhists try to free you from what they take to be the illusion of there being a self. There is no me to lose anything. Death is terrifying insofar as I worry about it being the dissolution of myself.
If there is no self, there's nothing to dissolve. It all makes sense--and I have tremendous respect for Buddhism--it all makes sense, given the thought that life is suffering. But for better or for worse, I'm a child of the west. I'm a child of the Book of Genesis, where God looks on the world and says, "It's good." For me, at least, the strategy of minimizing your loss by viewing the world as negative is not one that I can be at peace with. For me, life can be good. And so the choices for me, and I suppose for most of us, remain among the strategies with which I began. How can we best make our lives valuable? What is it that we can do that will allow us, with Hölderlin, to say, "once we lived like the gods"? |
YaleCourses_Philosophy_of_Death | 18_The_badness_of_death_Part_III_Immortality_Part_I.txt | Professor Shelly Kagan: Last time I sketched the deprivation account. That's a story or theory about what it is about death that makes it bad. What's bad about death is the fact that, because you're dead, because you don't exist, you're deprived of the good things in life. Being dead isn't intrinsically bad. It's not like it's an unpleasant experience. But it's comparatively bad. You're worse off by virtue of the fact that you're not getting the things that you would get, were you still alive. If I'm dead I can't spend time with my loved ones. I can't look at sunsets. I can't listen to music. I can't discuss philosophy. The deprivation account says, what's bad about death is the fact that you're deprived of the good things in life. Now, that seems pretty plausible, as basic stories go. But as we also saw last time, there are some philosophical puzzles about how it could be. There's the question of when death is bad for you, and even more importantly and more essentially, there's the difficulty of asking ourselves: do we really believe it's possible for something to be bad for you when you don't even exist? We saw a series of difficult choices. If we don't throw in an existence requirement--to put it more positively, if we say things can be bad for you even if you don't exist at all--then we're forced to say that things are bad for Larry. You'll recall that Larry was our name for a potential person, somebody who could have come into existence but never actually does or will come into existence. Well, talk about people who are deprived of the good things in life--Larry's completely deprived of the good things in life. If we think it doesn't matter whether or not you exist, for things to be bad for you, then we have to say, "Oh, things are bad for Larry." And not just Larry, but all of the 1.5 million billion billion billion never-to-be-born people.
The number of potential people is just staggering. And if we throw away an existence requirement, we have to say it's a moral tragedy of unspeakable proportions that these people are never born, that they never come into existence. Now, there are philosophers who are prepared to say that. But if you're not prepared to say that, it looks as though you've got to accept some kind of existence requirement. Why don't we feel sorry for Larry and his billions upon billions of never-to-be-born compatriots? Because, indeed, they don't exist. They're merely possible. And we might say, you've got to exist in order for something to be bad for you. But once we say that, it seems we're running towards the position that, in that case, death can't be bad for me, because of course, when I'm dead, I don't exist. So how can anything be bad for me? I proposed at the end of class last time that we could try to solve this problem by distinguishing between two versions of the existence requirement--a more modest version and a bolder version. The bolder version says, "In order for something to be bad for you, you've got to exist at the very time that it's happening." If we say that, then indeed, we can say, "It's not bad that Larry doesn't exist, because he doesn't exist now." Even if we wanted to think that there are good things he could be having, it's not bad for him not to have them, because he doesn't exist now. But if we go all the way to the bold existence requirement, we also have to say, "Look, when I'm dead that won't be bad for me, because, well, I won't exist then." But instead of accepting the bold existence requirement, we might settle for something a little bit less demanding, the thing I dub "the modest existence requirement": in order for something to be bad for you, there has to have been a time, some time or other, when you existed.
You've got to, as it were, exist at least briefly in order to get into the club, as we might put it, of those possible creatures that we care about and are concerned about morally. You have to have gotten into the club by at least having existed for some period of time. But once you're in the club, things can be bad for you, even if you don't happen to exist at that particular moment. If we accept the modest existence requirement, then we can say, it's not bad that Larry doesn't exist, because, well, Larry doesn't get into the club. In order to get into the club of things that we feel sorry for, you have to have existed at some moment or other. Larry and the billions upon billions upon billions of potential people who never actually come into existence don't satisfy the requirement of having existed at some time or other. So we don't have to feel sorry for them. But we can feel sorry for somebody who died last week at the age of 10, because we can say, well, they existed, albeit very briefly. And so they're in the club of beings that we can feel sorry for and say, look, it's bad for them that they're not still alive. Think of all the good things in life they would be getting if they were still alive. So the modest existence requirement allows us to avoid both extremes. Maybe then that's the position that we should accept. It may be, on balance, the best possible view here. But I just want to emphasize that even the modest existence requirement is not without its counterintuitive implications. Consider somebody's life. Suppose that somebody's got a nice long life. They come into existence and live 10, 20, 30, 40, 50, 60, 70, 80, 90 years. Nice life. Now, imagine that we bring it about that instead of living 90 years, they have a somewhat shorter life--10, 20, 30, 40, 50 years. We've caused them to die after 50 years as opposed to the 90 years they might otherwise have had.
Well, we can say, look, that's worse for them--to live merely 50 years instead of the full 90 or 100 years. And if we accept the modest existence requirement, we can say that, because after all, whether you live 50 years or 90 years, you did exist at some time or other. So the fact that you lost the 40 years you otherwise would have gotten, well, that's bad for you. There. Fair enough. That gives us the answer we want. That's not counterintuitive. Now, imagine that instead of living 50 years, the person lives only 10 or 20 years and then dies. Well, that's worse still. Think of all the extra goods they would have gotten if only they hadn't died then. And if I caused them to die after 20 years instead of 50 or 90 years, I've made things worse and worse. Imagine that I caused them to die after one year. Worse still. All this is perfectly intuitive. The shorter their life, the worse it is for them, the more they're deprived of the good things in life. So: 90-year life, not bad. 50-year life, worse. 10-year life, worse still. One-year life, worse still. One-month life, worse still. One-day life, worse still. One-minute life, worse still. One-second life, worse still. Now, imagine that I bring it about that the person never comes into existence at all. Oh, that's fine. See? That's the implication of accepting the modest existence requirement. If I shorten the life they would have had so completely that they never get born at all, or they never come into existence at all, then they don't satisfy the requirement of having existed at some time or other. So although we were making things worse and worse and worse as we shortened the life, when we finally snip out that last little fraction of a second, it turns out we didn't make things worse at all. Now we haven't done anything objectionable. That's, it seems, what you've got to say if you accept the modest existence requirement.
Of course, if we didn't have an existence requirement at all, we could say, "Well look, worst of all is never to have been born at all." Fair enough. But if you say that, then you've got to feel sorry for Larry. You've got to feel sorry for the 1.5 million billion billion billion never-to-be-born people. So which view is it that on balance is the--I don't want to say "most plausible." I think when we start thinking about these puzzles, every alternative seems unattractive in its own way. Maybe the most we could hope for is: which is the least implausible thing to say here? I'm not altogether certain. Let me turn to one more trouble or problem or puzzle for the deprivation account. And this particular puzzle arises whether we accept a bold existence requirement, a modest existence requirement, or no existence requirement at all, because we're going to deal with somebody who actually does exist at some time or other, namely you or me. This is actually a puzzle that some of you may have written your paper on, because it's the puzzle that Lucretius gives us. It's not a direct quote, but Lucretius basically says: look, most of us are upset and anxious at the fact that we're going to die. We think death is bad for us. There'll be this period after my death in which I won't exist. And the deprivation account helps say why that's bad, because during this period of nonexistence, you're not enjoying the good things in life. Fair enough, says Lucretius, but wait a minute. The period after you die isn't the only period during which you don't exist. It's not the only period in which, if only you were still alive, you could still be enjoying the good things in life. There's another period of nonexistence. It's the period before my birth. I think I've just switched the timeline here, but all right. Imagine this is the period before my birth.
Just as there'll be an infinite period after my death in which I won't exist, and realizing that fills us with dismay, there was, of course, an infinite period before I came into existence. Well, if nonexistence is so bad--and by the deprivation account it seems that we want to say that it is--shouldn't I be upset at the fact that there was this eternity before I was born? But, says Lucretius, that's silly, right? Nobody's upset about the fact that there was an eternity before they were born. In which case, it doesn't make any sense to be upset about the eternity of nonexistence after you die. Well, Lucretius doesn't offer this as a puzzle. Lucretius offers this as an argument that we should not be concerned about the fact that we're going to die. Most philosophers aren't willing to go with Lucretius all the way to the end of the bus route. Most philosophers want to say there's got to be something wrong with that argument someplace. Well, what are the possibilities here? One possibility is indeed to just agree with him, right? Nothing bad about the eternity of nonexistence before I was born, so nothing bad about the eternity of nonexistence after I die. That's one possibility, to agree with Lucretius. A second possibility is to say: look, Lucretius, you're right, we really do need to treat these two eternities of nonexistence on a par. But we could turn it around. Instead of saying with Lucretius, nothing bad about this one, so nothing bad about this one, maybe we should say instead, something bad about the one after we die, and so something bad about the one before we were born. Maybe we should just stick to the deprivation account and not lose faith in it. The deprivation account says it's bad that there's this period after we die, because if only we weren't dead then, we would still be able to enjoy the good things in life.
Maybe we should similarly say that it's bad that there's this period before we come into existence, when we don't exist, because if only we had existed then, we'd be able to enjoy the good things in life. Maybe Lucretius was right that we have to treat both periods the same, but wrong in thinking we shouldn't think either period is bad. Maybe we should think both periods are bad. Well, that's a possibility. What other possibilities are there? Another possibility is to say: Lucretius, you're right, there are two periods of nonexistence, but there's a justification for treating them differently. They're asymmetrical in a way that makes sense from the point of view of what we should care about. Well, it's easy to say that. Most philosophers want to take that last way out. They want to say there's something that explains why it makes sense, why it's reasonable, to care about the eternity of nonexistence after my death, but where that doesn't apply to the eternity of nonexistence before my birth. And then the puzzle is to point to a difference that would justify that kind of rationally asymmetrical treatment of the two periods. It's easy to say it's okay, it's reasonable to treat them differently. The philosophical challenge is to point to something that explains or justifies that. Now, a very common response is to say something like this. Consider the period after my death. I'm no longer alive. I have lost my life. In contrast, in the period before my birth, although I'm not alive, I have not lost my life. I have never yet been alive. And of course, you can't lose something you've never yet had. So what's worse, this answer suggests, about the period after death, is the fact that death involves loss, whereas prenatal nonexistence does not involve loss. And so, the conclusion goes, now we see why it's okay to care more about the period after death than the period before birth.
Because the one after death involves loss, and the one before birth does not. It's a very, very common response, but I'm inclined to think it can't be an adequate answer. It's true, of course, that the period after death involves loss, because the very definition of "loss" is: you don't have something that at an earlier time you did have. So the period after death involves loss. But the period before birth does not involve loss, because although I don't have life, I haven't, previous to that period, had life, so I haven't lost anything. Of course, there's another thing that's true about this prenatal period, to wit: I don't have life, and I'm going to get it. I don't yet have something that's going to come in the future. That's not true of the post-death period. There, I've lost life, but it's not true that I don't have life and am going to get it in the future. Interesting. In fact, we don't have a name for this other state, where you don't yet have something that you'll get later. Let's call it, not loss, but "schmoss," okay? So during the period after death, there's a loss of life but no schmoss of life. And in the period before birth, there's no loss of life, but there is a schmoss of life. And now we need to ask, as philosophers, why do we care more about loss of life than schmoss of life? What is it about the fact that we don't have something that we used to, that makes it worse than not having something that we're going to? It's easy to overlook the symmetry here, because we've got this nice word "loss," and we don't have the word "schmoss." But that's not really explaining anything; it's just pointing to the thing that needs explaining. Why do we care more about not having what once upon a time we did, than we care about not having what once upon a time we will? Well, there are some other proposals that we might make. A couple of them have actually been sketched in some of your reading.
So for example, Tom Nagel, in his essay on death, says: look, here's the difference. It's easy enough to imagine--and indeed for there to actually be a possibility of--my living longer. Suppose I die at the age of 80; if I didn't die then, I'd continue living to 90, 100, what have you. There it is. When you imagine me living longer, it's still me that you're imagining. To use the vocabulary that we introduced in thinking about some of Plato's arguments, we might say--suppose I die at age 80--that's a fact about me, but it's a contingent fact about me. It's not a necessary fact about me that I died at 80. Suppose at 80 I get hit by a car. It's not a necessary truth about me that I got hit by a car. I could have not gotten hit by a car, and lived to the ripe old age of 90 or 100. When you die is not an essential feature of you, so it's easy for us to think about the possibility in which I live longer. But, says Nagel, when I try to imagine what the alternative would be, if I'm going to be upset about the prenatal nonexistence, we have to imagine my being born earlier. I was born in 1954. Should I be upset about the fact that I was born in 1954 instead of 1944? That's the analog of being upset about the fact that I die in, whatever it is, 2044 instead of living to 2054. Nagel says, but look, when you try to think about the possibility in which, instead of being born in 1954, I was born in 1944--and for the rest of you, you've got to plug in your own birthdates--you can't do it. The date of my death is a contingent fact about me. But the date of my birth is not a contingent fact about me. And by birth we don't really mean when I came out of the womb. That could be changed, perhaps by having been delivered prematurely, or through Caesarean, or what have you. We really mean the time at which I come into existence. Let's suppose it's the time when the egg and the sperm join. That's not a contingent moment in my story.
That's an essential moment in my life story. How could that be? We say, couldn't my parents have had sex earlier, 10 years earlier? Sure they could have. But remember, if they had had sex 10 years earlier, it would have been a different egg and a different sperm coming together, so it wouldn't be me. It would be some sibling of mine that, as it happens, never got born. Had they had sex 10 years earlier, some sibling would have been born--but that's not me being born earlier. Different sperm and different egg make for a different person. So you can say the words, "if only I'd been born earlier," but it's not actually metaphysically possible. Well, it's an intriguing suggestion, but I think it can't quite be right, or at the very least, it cannot be the complete story about how to answer Lucretius' puzzle. Suppose we've got a fertility clinic that has some sperm on hold and some eggs on hold, in the sperm bank, in the egg bank, what have you. And they keep them frozen until they're ready to use them. And they thaw them out in, whatever it is, 2020. And then the person's born. Of course, he could look back and say, if only they had put my sperm and egg together 10 years earlier. That would still be me. After all, the very same sperm and the very same egg make for the very same person. So if only they had combined my sperm and egg 10 years earlier, I would have been born 10 years earlier. So Nagel's wrong in saying it's not possible to imagine being born earlier. In at least some cases, it is. Yet, if we imagine somebody like this, somebody who's an offspring of this kind of fertility clinic, and we ask, would they be upset that they weren't born earlier? Again, it still seems as though most people would say, "No, of course not." So the Nagel answer doesn't seem to me to be an adequate one. Well, there's another possible answer. This is Fred Feldman's answer, also in one of the papers that you've read.
Fred Feldman--Nagel's a contemporary philosopher, and so is Feldman--says: suppose I get killed by the bus in 2044, and I imagine if only I hadn't died then. What is it that we imagine? We imagine, instead of living 80 years, living 90 or 95 or more. We imagine a longer life. But what is it that happens when I say, if only I'd been born earlier? Well, says Feldman, you don't actually imagine a longer life; you just shift the entire life and start it earlier. After all, suppose we just said--especially if I had asked you this question before setting all of this up--if only you'd been born in 1800. Nobody thinks, "Oh, if only I'd been born in 1800, I'd still be alive. I'd be 200 years old." You think, "Oh, if I'd been born in 1800, I would have died in 1860, 1870, 1880," whatever it is. When we imagine being born earlier, we don't imagine a longer life. And there's nothing better about having a life earlier, according to the deprivation account. But when we imagine not dying when we actually die, when we say, "If only I died in 2050 instead of 2040," it's not that we imagine having been born later. We don't shift the life forward. We imagine a longer life. So, Feldman says, no wonder, no surprise that you care about the nonexistence after death. Because when you imagine that being different, you imagine a longer life. But when you start thinking about the nonexistence before birth and you imagine that being different, you don't imagine more goods in life; you just imagine them taking place at a different time. Well, that's an interesting possibility, I suppose. Again, maybe it's part of the story, but it doesn't seem to me like it's going to be the complete story. Because we can imagine cases where the person just thinks: look, if only I'd been born earlier, I would have had a longer life.
Let's suppose that next week astronomers discover the horrible fact that there's an asteroid that's about to land on the Earth and wipe out all life. So here it is, it's going to come on January 1, 2008. And there you are, at whatever your age is, 20 years old, 21 years old, on December 31, 2007, thinking: I've only had 20 years of life. If only I'd been born earlier. If only, instead of being born whenever it was, I'd been born 10 years earlier, I would have had 30 years of life instead of 20 years of life. That seems perfectly intelligible. So it does seem as though, if we put our head into it, we can get ourselves into thought experiments where we say, yeah, don't just shift the life, make it longer. But instead of making it longer in the post-death direction, make it longer in the pre-birth direction. Again, you can imagine somebody saying, "Yeah, and when we do that, we should feel the same." It doesn't really matter which direction it goes. So symmetry is the right answer after all. When I think about the asteroid example, I find myself thinking, huh, maybe symmetry is the right way to go here. Maybe Feldman's right that normally we just shift instead of extending. But if I'm careful to extend, maybe it really is bad that I didn't get started sooner and have a longer life in that direction. Well, here's one other answer that's been proposed. This is by yet another contemporary philosopher, Derek Parfit. Parfit says, it's true that when I think about the nonexistence after I die, that's loss, whereas the nonexistence before I'm born, that's not loss, that's mere schmoss. And it's true that we need an explanation of why loss is worse than schmoss. But we can see that this is not an arbitrary preference on our part, because in fact it's part of a quite general pattern we have of caring about the future in a way that we don't care about the past. This is a very deep fact about human caring.
We are oriented towards the future and concerned about what happens in it, in a way that we're not oriented towards and concerned about what happened in the past. Parfit's got a very nice example to bring the point home. He says, imagine that you've got some medical condition that will kill you unless you have an operation. So fair enough, you're going to have the operation. This will allow you to live your life. Unfortunately, in order to perform the operation, they can't have you anesthetized. You have to be awake, perhaps in order to tell the surgeon, "Yeah, that's where it hurts," whatever it is. Sort of like when the dentist pokes and says, "Does this hurt? Does that hurt?" So you've got to be awake during the operation, and it's a very painful operation. We can't give you painkillers, because then you won't be able to point out, does this hurt, does that hurt, and so forth and so on. So you'll be awake during the operation, basically being tortured. It's still worth doing, because this will cure the condition, so then you'll have a nice long life. Since we can't give you painkillers and we can't put you out, what we will do is this: after the operation is over, we'll give you a very powerful medication, which will give you short-term, sort of very localized, amnesia. You won't remember anything about the operation itself. So at least you won't have to dwell upon these horrible memories of having been tortured. Those will be completely wiped out. Okay, so: painful operation, you're awake during it, and after the operation, you're given this thing that makes you forget anything about the operation at all. The preceding 24 hours will be completely wiped out. So you're in the hospital and you wake up and you ask yourself, "Huh, have I had the operation yet or not?" You don't know, right?
Because of course, if I haven't had it, no wonder I don't remember it; but if I have had it, I would have been given that temporary, sort of localized, amnesia. So of course I wouldn't know whether or not I've had it. So you ask the nurse, "Have I had the operation yet or not?" She says, "I don't know. We have a couple of patients on the hall today--some of them have had it and some of them are scheduled to have it later today. I don't remember which one you are. Let me go look at your file. I'll come back and I'll tell you." So she wanders off. She's going to come back in a minute or two. And as you're waiting for her to come back, you ask yourself: what do you want the answer to be? Are you indifferent, or do you care whether you're one of the people who's already had it, or somebody who hasn't yet had it? Now, if you're like Parfit, and for that matter, like me, then you're going to say, of course I care. I want it to be the case that I'm one of the people who's already had the operation. I don't want to be one of the people who hasn't yet had the operation. You might say, how can that make any sense? One way or the other, your life is going to include the operation. At some point in your life history, that operation is going to have occurred. And so there's the same amount of pain and torture, regardless of whether you're one of the people that had it yesterday or one of the people that's going to have it tomorrow. But for all that, says Parfit, the fact of the matter is perfectly plain: we do care. We want the pain to be in the past. We don't want the pain to be in the future. We care more about what's happening in the future than we care about what's happening in the past. That being the case, it's no surprise we care about the nonexistence in the future in a way we don't care about the nonexistence in the past. Well, that may be right as far as explanation goes, but we might still wonder whether or not it's any kind of justification.
The fact that we've got this deep-seated asymmetrical attitude towards time doesn't in any way, as far as I can see, yet tell us whether or not that's a justified attitude. Maybe evolution built us to care about the future in a way that we don't care about the past and this shows up in lots of places, including Parfit's hospital case, including our attitude towards loss versus schmoss, and so forth and so on. But the fact that we've got this attitude doesn't yet show that it's a rational attitude. How could we show that it's a rational attitude? Well, maybe we'd have to start doing some heavy-duty metaphysics, if what we've been doing so far isn't yet heavy-duty enough. Maybe we need to talk about the difference between--the metaphysical difference between the past and the future. The past is fixed, the future is open, the direction of time. Maybe somehow we could bring all these things in and explain why our attitudes towards time make sense. I'm not going to go there. All I want to say is it's not altogether obvious what the best answer to Lucretius' puzzle is. So when I say, as I have said--and I'm going to say it many times over the course of the remaining weeks--that the central thing that's bad about death is the fact that you're deprived of the good things in life, when I make use of the deprivation account, I don't mean to suggest everything is sweetness and light with regard to the deprivation account. I think there are some residual puzzles about how it could be that death is bad. And in particular, how it could be that the deprivation account puts its finger on what's bad about death. But for all that, it seems to me the right way to go. It seems to me that the deprivation account does put its finger on the central bad thing about death. Most centrally, what's bad about death is that when you're dead, you're not experiencing the good things in life. Death is bad for you because you don't have what life would bring you, if only you hadn't died. All right. 
If that's right, should we conclude--indeed, do we have to conclude--that if death is bad because it's a deprivation, then if I weren't dead, I wouldn't be deprived, and so the best thing of all is never to die at all, to wit, immortality? Suppose I get hit by a truck next week. That's bad, because if only I hadn't gotten hit by a truck, I might have lived another 20 or 30 years, whatever. I would have gotten the good things in life; that would have been better for me. Ah, but when I die of some heart disease at age 80, that's bad too, maybe, because if only I didn't have heart disease, I could have lived another 10, 15, 20 years and gotten more good things in life. If only I hadn't died at 100, I would have gotten more good things in life. If only I hadn't died at 500, I would have gotten more good things in life. Whenever it is I die, won't it always be true, if we accept the deprivation account, that if only I hadn't died then, I would have gotten more good things in life? And so whenever it is you die, death is bad for you. So the best thing for you would be never to die: immortality. There are really two questions that we need to ask. One is: Does consistency, does logic, require that if you accept the deprivation account, you believe immortality's a good thing? Second question: Even if logic doesn't require that, is it true that immortality's a good thing? Let me start with the first one, because I think that's the easier one. Logic alone--logic plus the deprivation account, rather--doesn't require us to say immortality's a good thing. Why? Because strictly speaking, what the deprivation account says is, death is bad insofar as you're deprived of the good things in life by virtue of not existing.
If only you hadn't gotten hit by that truck, you would have gone on to an exciting life in your career as a professional dancer. You would have had a family, or what have you. Whatever it is. You would have traveled around the world. Life would have given you a lot of great things and you get deprived of those great things, that's why it's bad that you got hit by the truck. That is to say, death is bad, when it's bad, by virtue of the fact that it deprives you of the good things in life. But suppose--we don't yet know whether this could actually happen, but here we're just talking about logical possibilities--suppose that there's no more good things for life to give you. Then when you're deprived of life by death, you're not being deprived of any good things, and so it's not bad for you to be dead at that point. Death is only bad, according to the deprivation account, when there are good things that would have come your way. When, as we might put it, on balance, the life you would have had would have continued to be good for you. When that happens, then to lose that good bit of life, that's bad for you. But if it should turn out that what life would have had hereafter, instead of being good, would have been hellish, it's not bad for you to avoid that. It might actually be good for you to avoid it. So, even if we accept the deprivation account, we're not committed to the claim that death is always bad. We have to look and see, what would life actually hold out for us? Logic alone, plus the deprivation account, doesn't force us to say immortality would be a good thing. After all--this is really a crucial point to understand--things that are good for you in limited quantities can become bad for you if you get more and more and more and more of them. Well, I love chocolate. So suppose somebody comes up to me with a box of Godiva chocolate, offers me a couple of chocolates. I say, "Wonderful! I love Godiva chocolate." And then they give me some more and some more. 
Twenty pieces of chocolate. Well, you know, by the time I've got 20 pieces of chocolate, I'm not sure I really want the 21st piece. But you keep giving me more. Thirty pieces of chocolate, 40 pieces of chocolate, 100 pieces of chocolate. At some point--I've never actually had this much chocolate; I don't know where the point is, but at some point--I'm going to say, you know, the first 10, 20, 30 pieces of chocolate, those were good, but giving me the 21st piece of chocolate or the 50th piece of chocolate, that's no longer good. Logically, at least, it could happen. Logically, it could happen that although in small quantities--50 years, 60 years, 100 years--life is good, at some point, maybe life would turn bad for us. Just like being force-fed more and more chocolate. And if it did turn bad for us, the deprivation account would allow us to say, oh, at that point, dying's not bad for you. Well, that's all that logic tells us. Logic simply tells us we don't have to believe immortality's a good thing. But for all that, it could still be a good thing. So that's question number two. Let's ask, what should we think about the prospect of living forever? Would it, in fact, be better and better and better? Somebody dies at age 10 of some horrible disease; better if they'd made it to 40. Somebody dies at age 40; better if they'd made it to 80. Somebody dies at age 80; better if they'd made it to 100, 120. Is it true that life would get better and better, the longer it is? Now, in asking this question, we have to be careful to be clear about what exactly we're imagining. Here's one way to try to imagine that story. Imagine that life works sort of the way it does now, with the kinds of changes that bodies undergo as they get older. But instead of those changes basically killing you at 80, 90, or 100, they don't. You get more and more of those changes, but they never actually kill you.
This is the sort of thought experiment that Jonathan Swift undertakes in the passage from Gulliver's Travels that I've had you look at. He imagines Gulliver coming to a country where a subset of the people live forever, immortals. And at first, Gulliver says, "Oh, isn't this wonderful?" But he forgot to think about the fact that if the kinds of changes that we undergo continue to accumulate, then you're getting older. Not just older, but weaker, in more and more discomfort; senility sets in with a vengeance, until eventually you've got these creatures that live forever, but their minds are gone, and they're in pain and they can't do anything because their bodies are utterly infirm and diseased and sick. That's not a wonderful thing to have. If immortality were like that, says Swift, it would be horrible. For an immortality like that, death would be a blessing. And Montaigne, in the essay that I've had you look at, says indeed, death is a blessing, because it puts an end to the pain and suffering and misery that afflict us in our old age. Well, all of that seems right, but I suppose we'd be forgiven for thinking: look, when we wanted to be immortal, we didn't want this kind of life going on and on with the same downward trajectory. We wanted to live forever, hale and hearty and healthy. So even if the real world wouldn't allow us that, let's just ask, science-fiction style, whether or not living forever would in fact be good. Isn't it at least true that in principle, living forever could be good? You've got to imagine changing some of the facts about what it would be like to live forever. So instead of just asking the question I started with--would it be good to live forever?--be careful. If you're not careful, this is going to be like one of those horror stories, right? Where you've got a couple of wishes and you aren't careful about how exactly you state the wish. And so you get what you want, but it ends up being a nightmare, right?
If you just tell the fairy who gives you three wishes, "I want to live forever," and you forget to say "and be sure to keep me healthy," well, that's going to be a nightmare. That's what Swift told us. So let's be careful. Let's throw in health and anything else you want. Throw in enough money to make sure you're not poor for eternity--wouldn't that be horrible, to be healthy but impoverished forever? Throw in whatever you want. All we need to ask at this point is, is there any way at all to imagine immortality where immortality of that sort would be a good thing? Is there any way to imagine existing forever where that would be good for you, forever? Now, it's very tempting at this point to say, look, of course. Nothing could be easier. Just imagine being in heaven forever, right? You're done, right? You've got heavenly bliss. Isn't this incredible? Wouldn't we all love to be in heaven forever? The trouble is, we were a little bit vague about what exactly life is like in heaven. It's a striking fact that even those religions that promise us an eternity in heaven are rather shy on the details. Why? Because, one might worry, if you actually tried to fill in the details, this wonderful, eternal existence ends up not seeming so wonderful after all. So imagine that what's going to happen is that we all become angels and we're going to spend eternity singing psalms. Now, I like psalms, and I actually rather enjoy singing psalms at services. On Saturday mornings, I sing psalms in Hebrew and I rather enjoy it. But if you ask me, what about the possibility of an eternity of doing that? That doesn't really seem so desirable. Think of the movie Bedazzled--not the remake; I haven't seen the remake, but the original. In the original, there's a human character who hooks up with the devil. He meets the devil and he asks the devil, "So why did you rebel against God?" The devil says, "Well, I'll show you. You sit here on the--" whatever it was; the mailbox, I think it was.
"I'll sit up here on the mailbox," the devil says. "And you dance around me and say, 'Oh, praise the Lord, aren't you wonderful? You're so magnificent. You're so glorious.'" And the human does this for a while and he says, "This has gotten really boring. Can't we switch?" And the devil says, "That's exactly what I said." Now, when you try to imagine heaven as singing psalms for eternity, that doesn't seem so attractive. All right, so don't imagine heaven as singing psalms for eternity. Imagine something else. But what? Imagine what? This is the thought experiment that I invite you to participate in. What kind of life can you imagine, such that having that life forever would be good? Not just for another 10 years, not just for another 100 years, not just for another 1,000 years, or a million years, or a billion years. Remember, eternity is a very, very long time. Forever goes on forever. Can you describe an existence that you would want to be stuck with forever? Now, it's precisely at this point that Bernard Williams, in another one of the papers I had you take a look at--Bernard Williams says no. No kind of life would be one that would be desirable and attractive forever. No kind of life at all. In short, says Williams, every life would eventually become tedious and, worse, excruciatingly painful. Every kind of life is a life you would eventually want to be rid of. Immortality, far from being a wonderful thing, would be a horrible thing. Suppose, for the moment, that we were to agree with Williams. What then should we say? We might say: look, at least when we're being careful, if we agree that immortality would be bad, we can't say that death per se is bad. The very fact that I am going to die turns out not to be a bad thing, because after all, the only alternative to dying is immortality. And if immortality would be a bad thing, then death is not a bad thing. Death is a good thing.
We might say, if we accept Williams' thought, that the fact of our mortality is good rather than bad, if immortality would eventually be bad. Now of course, it's crucial to notice that even if we say this, that doesn't mean that if you get hit by a car tomorrow, that's good. You don't have to say that. You can still say it's a bad thing that I got hit by a car tomorrow, because after all, if I hadn't gotten hit by a car tomorrow, it's not as though I would have been condemned to immortality; I just would have lived another 10 or 20 or 30 years. And those years would have been good ones for me. And maybe even when I die--let's suppose I live to the ripe old age of 100--I could perhaps still say, it's a bad thing for me that I die at the age of 100. Because if I hadn't died now, I might have lived another 10, 20, 30 years and still enjoyed things in life, enjoyed playing with my great-grandchildren, whatever it is. To say that immortality is bad is not to say it's a good thing that we die when we do. You can still believe consistently that we die too soon. Even if in principle, eventually, sooner or later, death would no longer be bad, it could be that it comes too soon for all of us. Still, the question we want to ask is, is there any way even to imagine an immortal life that would be worth having? In principle, could immortality be a good thing? Or is Williams right that no, even in principle, go as fantastic and science-fictiony as you want, an immortal life could not be desirable? So until next time, I invite you to think that question through. If you were trapped into immortality, what would the best kind of immortal life be like?
YaleCourses, Philosophy of Death, 19: Immortality, Part II; The Value of Life, Part I

Professor Shelly Kagan: We've been talking about the question of whether or not it would be desirable to live forever--whether immortality would actually be a good thing, as most of us normally presume, or whether in fact, as Bernard Williams argues, it would be undesirable. The question we turned to was: just let your imagination run free. Instead of asking what it would be like to continue, only longer, along the kind of trajectory that humans have in the real world, where you just get sicker and more and more frail and incapacitated, ask yourself not whether immortality of that sort would be valuable, but whether it is even so much as possible to describe a life that you would want to live forever. That's the question I left you with last time. I think I already tipped my cards on this matter. I'm inclined to agree with Williams. I think that no matter how we try to fill in the blank, it won't work--and it's a very long blank to fill. The crucial point here for us is that immortality means not just living a very long time, or even an extraordinarily long time, but literally living forever. I think it's very difficult, indeed I think it's impossible, to think of anything you'd want to do forever. I have a friend who once claimed to me that he wanted to live forever so that he could have Thai food every day for the rest of, well, the rest of eternity. I like Thai food just fine, but the prospect of having Thai food day after day after day after day for thousands, millions, billions, trillions of years no longer seems an attractive proposal. It seems like it becomes some kind of a nightmare. In the same way, as I indicated previously, although I like chocolate--I love chocolate--the prospect of having to eat more and more and more chocolate eventually becomes a sickening one. Think of any activity.
Some of you may enjoy doing crossword puzzles, and perhaps doing crossword puzzles a couple hours a day is enjoyable. But imagine doing crossword puzzles every day for 10 years, 1,000 years, a million years, a billion years, a trillion years. Eventually, presumably, so it seems to me, you'd end up saying, "I'm really tired of crossword puzzles." Sure, there'd be some new particular puzzle you hadn't seen before, but you'd sort of step up a level and say, "Although I haven't seen this particular one before, I've seen crossword puzzles before. There's really nothing new under the sun here. The fact that I haven't seen this particular combination of words isn't enough to make it interesting." Well, crossword puzzles aren't a very deep subject, and we might wonder whether or not we would do better if we were engaged in something more mentally challenging than that. This may indicate something unusual about me, but I rather like math. And the prospect of having a lot of time to pursue math problems of a richer and deeper sort seems fairly attractive. Yet even there, when I imagine an eternity of thinking about math--or for that matter, an eternity of thinking about philosophy, which I obviously like even more than math--the prospect seems an unattractive one. I can't think of any activity I'd want to do forever. Now, of course, that's a bit of a cheat, because the claim isn't that you'd spend eternity doing math problems and nothing but math problems. Right now, with our 50, 80, 100 years, we don't fill our day doing only one kind of activity. We fill our day with a mixture of activities. But it doesn't really help having Thai food for dinner and Chinese for lunch. Or perhaps Chinese on Mondays, Wednesdays, and Fridays and Thai on Saturdays and Sundays, spending two hours in the afternoon doing math and three hours in the morning doing philosophy. That sounds like a pretty pleasant life.
But again, if you think of the possibility of doing that for all eternity and never getting away from it, never being free from it, the positive dream of immortality, I think, becomes a nightmare. Well again, maybe I'm just not being creative enough in my imagination. A different former colleague of mine once talked about the prospect of having a kind of heavenly vision of the divine. Maybe that would be desirable forever. And she described it as: think of what it's like to have a really great conversation with a friend, one that you wish would never end, except God's this infinitely rich friend, and so the conversation ends up seeming desirable forever. Well again, I can say the words, but when I try to imagine that possibility and take it seriously, at least speaking personally, it doesn't hold up. No friend that I've ever talked with is one that I would actually want to spend eternity talking to. And it's of course possible to say, well, just imagine a friend that you would want to talk to through all eternity. But the whole point is, I can't imagine what that would be like. When I do my best to imagine some kind of existence that would be desirable or attractive forever, it just doesn't work. It becomes a nightmare. Well again, maybe what we need to do is not just imagine the same cycle going through week after week after week, but whole careers. Maybe you could spend 50 or 100 years pursuing philosophy as your career. And then 50 or 100 years pursuing math as your career. And 50 or 100 years traveling around the world. And 50 or 100 years being an artist and working on your watercolors, or whatever it is. Well, it certainly seems like we could get more time out of that. But again, the crucial point to remember is that forever is literally forever. There's no life that I'm able to imagine for myself that I would want to take on forever.
Well, you might say, surely there could be creatures that would want to live forever, that would enjoy an eternal existence. And I think that's probably right. Scientists have learned how to do the following. You can take a rat and put an electrode in its brain. If you put the electrode in just the right place, then when the electrode gets turned on, it stimulates the pleasure center in the rat's brain and the rat gets a little burst of pleasure, a pretty intense burst of pleasure. And in fact, you can take the electrode, hook it up to a lever, and teach the rat how to push the lever and give itself a little burst of pleasure. Now, what happens to rats when you do this? Well, maybe unsurprisingly, what they do is keep pressing the lever. Indeed, they press the lever and they stop eating, they're no longer interested in sex. They basically just give themselves this little orgasmic burst of pleasure--well, until they die. Now, of course, it's too bad the rat dies, but if we imagine the rat not being mortal--perhaps you've got it on an IV so it's getting its nutrients that way--then perhaps it's easy to imagine the rat simply pressing the lever forever, getting this intense burst of pleasure and being content to do that for all eternity. So if it's so easy to imagine that for the rat, why not for us? Why not just put on our own orgasmitron hat--not a rat lever now, but a human lever--with the electrodes stimulating our own brains so that we get this intense burst of pleasure? And just imagine this intense burst of pleasure going on forever. What could be more desirable than that? Except, when I think about that--and I invite you to think about it--I don't actually find it an especially attractive prospect. Mind you, it's not that I think we couldn't be stimulated to get pleasure forever. It's that there's something that distinguishes humans from rats. No doubt I would enjoy it.
And no doubt you would enjoy it for a very long time. But I imagine that after a period, there'd be this--well, humans have this ability to look down on their experiences, to step back from their experiences and assess them. Even now, as I'm sitting here lecturing to you, part of me, because of this very question that I'm raising, is thinking about how the lecture is going, and am I going to get through what I need to get through, and so forth and so on. We can reflect on our first-order or base-level experiences. Now, imagine that you're in the pleasure-making machine. After a while, part of you is going to start saying, "Huh, it feels the same as it did yesterday, and the day before that, and the day before that. I imagine this is how it's going to feel tomorrow, and the day after that, and the day after that." And eventually this question would arise: "Is this really all there is to life, just simple pleasure like this?" The thing about being human is that, unlike the rat, you're not just going to stay caught up in the moment. You're going to take this meta-level or higher-level standpoint, look down at the pleasure, and wonder, "Is this all that there is to life?" And I think eventually that question would gnaw at you, and sour and override the pleasure. Eventually, you'd become horrified that you were, in effect, stuck in this rat-like existence. Of course, the human part of you is able to say there's more to you than this rat-like existence. But precisely that human part of you is going to rebel at the unending parade of simple rat-like pleasures. So, I don't think an eternity like that would be such a good thing. Maybe it would be for a rat, but not for a human. Of course, we could perhaps deal with that problem by making us more rat-like in terms of our thinking processes. Perhaps the right kind of lobotomy would do the trick.
I don't actually know exactly what it would take, but suppose you just cut and snip the relevant nerve endings so that we're no longer able to engage in that higher-order thinking. No longer able to raise the question, "Is this all there is?" No longer able to step back from the first-order pleasure. No doubt, you could turn us into creatures like rats in that way. And then, I presume, we would continue to enjoy it forever. But the question isn't really, is there something you could do to a human being so that he'd be happy, or at least enjoying himself, forever? It's rather, do you, sitting here now, thinking about that kind of life, want that for yourself? Do you want to be lobotomized, if that's the only way--at least, the only way we've got so far--to imagine a life that you would enjoy forever? Sure, screw me up enough and maybe I'd enjoy being alive forever. But that doesn't mean that I now want that for myself. That doesn't seem to me to be some gift you've given me. That seems to me to be some horrible penalty you've imposed on me, that you've reduced me from being a human being, able to engage in the full range of reflection, to something like a rat. So again, when the question is posed, "Is there a kind of life that I or you would want to live forever?" the question I'm asking is a question to you, here, now. Is there a kind of life that you would want to live forever? Not whether, if we altered you, that product would want to have it forever. I can't see how to do it. Well look, there's one other possibility. Instead of imagining lobotomizing us, turning us into rats, suppose we just say: look, the problem, of course, is boredom. The problem is tedium. The problem is that you get tired of doing math after a while--100 years, 1,000 years, a million years, whatever it is. You say, "Yeah, here's a math problem I haven't solved before, but so what? I've just done so much math, it holds no appeal for me anymore."
Or, you cycle through all the great art museums in the world and you say, "Yeah, I've seen these Picassos. I've seen these Rembrandts before. I've gotten what there is to get out of them. Isn't there anything new?" And the problem is no--even if there are literally new things, they're not new kinds of things that can still engage us afresh. Well, what's the solution to that? The solution might be a kind of amnesia, a kind of progressive loss of memory. So here I am, 100 years, 1,000 years, 500,000 years in, whatever it may be, getting pretty bored with life. But now we introduce a progressive memory loss, so that I no longer remember what I did 100,000 years earlier. By the time I'm a million, I no longer remember what I was doing as a lad of 500,000. And by the time I'm a million and a half, I no longer remember what was happening way back. I know I was alive, perhaps. Or maybe I don't even remember that I was alive. I sort of remember the last 10 or 20 thousand years and that's it. And while we're at it, why don't we throw in an overhauling of your interests and desires, your tastes? So your tastes in music evolve over thousands of years. And your tastes in art or literature. Or, you like math now, but then you lose your taste for math and become the kind of person who's interested in Chinese poetry, whatever it is. Wouldn't that do it? If we had this sort of progressive, altogether radical alteration--not just minimal alteration, but radical alterations of my memory, my beliefs, my desires, my tastes--couldn't that be a kind of existence that would be forever enjoyable, and yet wouldn't be a rat-like existence? I'd be engaged in studying Chinese poetry. I'd be engaged in doing math. I'd be engaged in studying astronomy, what have you. That's far better than the rat's existence. And yet, at no point do I become bored, because, roughly, I'm so different from period to period to period.
Well, I think actually you probably could tell a story where that was true, especially if we throw in enough doses of memory loss. But this case should remind you of something, because it's one that we've actually discussed before, under the label of "Methuselah." Methuselah, remember, when we talked about personal identity: we imagined somebody much shorter-lived--at that time it was only several hundred years: a hundred years old, 200 years old, 300 years old, 600, 700, 800, 900 years old. By the time Methuselah was 900 years old, he no longer remembered anything about his childhood. What I found when I thought about the Methuselah case was that even though the me at age 800 would be the same person as the one who's standing here in front of you now, it didn't really matter. I said, "So what?" When I thought about what I wanted in survival, it wasn't enough that there be somebody in the distant future who would be me. That wasn't good enough. It had to be somebody with a similar enough personality. You tell me, "Oh, there's going to be somebody alive. It'll be you, but it'll be completely unlike you. Different tastes, no memories of having taught philosophy and so forth." I say, that's all rather interesting from a metaphysical point of view, but speaking personally, I don't really care. It's of no interest to me to survive, and merely saying the mantra, "Oh, but it's me," doesn't make it more desirable to me. What I want isn't merely for somebody to be me. I want them to be sufficiently like me. And the problem with the Methuselah case was, go far enough out, and it's no longer sufficiently like me. So I don't really care that there'll be somebody out there who's still me, if they're so unlike me now. But that's, after all, what we've just described in imagining the cycling with the memory loss. Yeah, there'll be somebody around 100,000 years from now, 500,000 years from now, and maybe they'll be me. I don't care. It doesn't give me what I want when I want to survive.
So perhaps we should put the point in the form of a dilemma. Could immortality be something worth having forever? Well, on the one hand, if we make it be me, and similar to me, boredom's going to set in. The only way to avoid that is to lobotomize me, and that's not desirable. On the other hand, if we solve the problem of boredom setting in with progressive memory loss and radical personality changes, maybe boredom won't set in, but it's not anything that I especially want for myself. It just doesn't matter to me that it's me, any more than it would matter to me if they just told me, "Oh, there'll be somebody else around there who will like Chinese poetry." So, is there a way of living forever that's attractive to me? I can't think of what it would look like. So, I agree with Bernard Williams when he says immortality wouldn't be desirable. It would actually be a nightmare, something you would long to free yourself from. Now, having said that, of course, that doesn't in any way mean that it's a good thing that we die when we do, at 50 or 80 or 100. From the fact that after 100,000 years or a million years or whatever it is, eventually life would grow tiresome, it hardly follows that life has grown tiresome after 50 years or 80 years or 100. I don't believe I'll come close to scratching the surface of what I would enjoy doing by the time I've died. And I imagine the same thing is true for you. So perhaps the best form of life would be not immortality; I think that would be not particularly desirable. And not what we've got now, where you live a mere 50 or 80 or 100 years. But rather, the best thing, I suppose, would be to be able to live as long as you wanted. This is the sort of thing that Julian Barnes basically imagines in the chapter "The Dream" that I had you take a look at. Barnes envisions heaven as a situation in which you can stick around doing what you want to do for as long as you want to do it. But Barnes says, eventually you'll have had enough.
And once you've had enough, you can put an end to it. Still--the fact that we can put an end to it, that's the point we've already been flagging, that immortality would not be so good. But the new thought here is that what would be good would be being able to live until you were satisfied, until you'd gotten what goods there were to get out of life. What all this suggests, then, and this is a point that I really made before, is that the best understanding of the deprivation account doesn't say that what's bad is the mere fact that we're going to die. If I'm right in thinking immortality would be undesirable, then the fact that we're going to die is good, because it guarantees that we won't be immortal, which would be a nightmare, an unending nightmare. Still, even though it's not a bad thing that we will die, it could still be a bad thing that we die when we do. It could still be the case that we die too soon. According to the deprivation account, death is bad, when it's bad, because it deprives us of the good things in life, insofar as we would have continued to get good things in life. But if life would no longer have anything good to offer you, if what you then would have had would have been something negative instead of something positive, then at that point, dying wouldn't actually be a bad thing. It would be a good thing. Death is bad insofar as it deprives you of a chunk of life that would have been good. Insofar as it deprives you of a future that would have been bad, then death's not actually bad, it's actually good. Now, in stating this view this way, I'm obviously presupposing that we can make these kinds of, at least, overall judgments about the quality of your life, how well off you are. Is life giving you good things, or is life giving you bad things? Is it worth continuing to live, or is it not worth continuing to live? So, I want to turn to that topic and spend, oh, probably the better part of a lecture or so.
The rest of today and some of next time talking about the question of, well, what is it for a life to go well? How do we assess what makes a life--a good life versus a bad life? And I don't mean a morally good life. I mean, good for the person whose life it is. A life that you think, "I'm benefiting from having this life." What are the ingredients or constituents or elements of a good life versus a bad life? And of course, since it's not just black and white, good life or bad life, but various shades of gray, better lives and worse lives, what's the yardstick by virtue of which we measure better and worse lives? What goes into a good life? Now, in thinking about this question, this is--you might think of the topic as the nature of wellbeing. And like all the other topics we've talked about in this class, it's a complicated subject, about which one could spend a great deal of time. All we're going to do here is really just, once again, scratch the surface. But the very first point that needs to be made, I think, is this. If you start listing all the things worth having in life, it might seem as though you couldn't possibly come to any general organizing principles. Think about it. What's worth having? Well yeah, jobs are worth having. Money's worth having. Sex is worth having. Chocolate's worth having. Ice cream's worth having. Air conditioners are worth having. What are some of the things worth avoiding? Well, being blind is worth avoiding. Being mugged is worth avoiding. Diarrhea is worth avoiding. Pain's worth avoiding. Being unemployed is worth avoiding. War is worth avoiding. What kind of systematicity could we possibly bring to all of this? Well, the crucial first distinction, I think, is this. We need to distinguish between those things that are good because of what they lead to. That is, more strictly, only because of what they lead to. And those things that are valuable for their own sake, or in their own right. Take something like a job. A job's worth having.
Why is a job worth having? Well, a job's worth having because, well, among other things, it gives you money. All right. Money's worth having. All right. Why is money worth having? Well, money's worth having because, among other things, you can buy ice cream with it. All right. Why is ice cream worth having? Well, ice cream's worth having because when I eat ice cream it gives me this pleasurable sensation. All right. Why is the pleasurable sensation worth having? At this point, we get a different kind of answer. At this point, we say something like, pleasure is worth having for its own sake. The other things were valuable as a means, ultimately to pleasure. But pleasure is worth having for its own sake. The things that are valuable as a means we can say are instrumentally valuable. The things that are worth having for their own sake, philosophers call intrinsically valuable. If we look back at that long, open-ended list of things that were good or bad, we'll find that most of the things on that list are instrumentally good. They're good because of what they lead to. Or, for that matter, instrumentally bad. Why is disease bad? Well, among other things, it means perhaps that you can't enjoy yourself. So it deprives you of pleasure. Or perhaps it means because you're sick, you can't hold your job down. If you can't hold your job down, you can't get the money and so forth and so on. Ultimately, most of the negative things on that list were instrumentally bad. Most of the good things on that list were instrumentally good. If we want to get anyplace on the question about the nature of the good life, what we need to focus on is not the instrumental goods and bads, but rather the intrinsic goods and bads. You've got to ask yourself, "What's worth having for its own sake? What's worth having in and of itself?" Well, one natural suggestion is that pleasure is worth having for its own sake and pain is probably worth avoiding for its own sake. 
So pain's probably intrinsically bad; pleasure is intrinsically good. Notice by the by that logically speaking, there is nothing that stops the very same thing from being both. And actually you can get other weird combinations. You go to the dentist and he pokes you. He says, "Does this hurt? Does that hurt?" in order to try to figure out where there's gum disease. And the pain that he causes is intrinsically bad. In and of itself it's bad. Yet, for all that, it's being useful there. It's providing a means of deciding where the gum has decayed. And that allows the dentist to improve your gums, which avoids more pain down the road. So the pain you're suffering now is actually instrumentally valuable, useful as a means, even though it's intrinsically bad. Similarly, when I work, I enjoy myself. And so the pleasure I'm getting then is intrinsically good. But it's also instrumentally good. The fact that I'm enjoying myself makes it easier for me to work harder. Perhaps I'm more productive, I do better at my job. So the pleasure is both intrinsically valuable and instrumentally valuable. So there's no claiming that things have to be one or the other, but not both. Still, in trying to get clear about the nature of wellbeing, the crucial thing to do is to focus not on the question about instrumental value, but rather to focus on the question of intrinsic value. What things are worth having for their own sake, whether or not those things also have instrumental value, or what have you? What things are worth having for their own sake and what things are worth avoiding for their own sake? Well, in giving these examples, I've already indicated at least two things that belong on the list. It seems pretty plausible to think pleasure is intrinsically good. One thing, maybe not the only thing, but at least one thing, that goes into a life worth having is enjoying it, is pleasure. 
And one thing that seems intrinsically bad, one thing that seems to reduce the value of a life, is pain. Most of us agree, then: pleasure is intrinsically valuable; pain is intrinsically bad, disvaluable, has anti-value. Well, suppose we make, for the moment, the bold conjecture, the philosophical claim, that not only is pleasure one good thing and pain one bad thing. Suppose that's the entire list. Suppose we conjecture that the only thing intrinsically valuable is pleasure and the only thing intrinsically bad is pain. That view is called hedonism. So hedonism is a view that many people are attracted to, perhaps one some of you believe. It's got a very simple theory of the nature of wellbeing. Being well off is a matter of experiencing pleasure and avoiding or minimizing the experience of pain. That's hedonism. A little later we'll turn to the question of, well, if hedonism is not the right story, what else belongs on the list--or is it the right story? We'll turn to that question a little bit later. But notice that if we've got hedonism or, for that matter as we'll see, some other theories of wellbeing--if we've got hedonism, we're able to make the kinds of evaluations that I was helping myself to when I started talking a few minutes ago about, well, you know, if what life would hold for you is bad overall, then you're better off dying and so forth. What's going on when we make those judgments? Well, the hedonist offers us a very simple, straightforward answer. In deciding whether what life holds for you is worth having, better than nothing, you, roughly speaking, add up all the good times and subtract all the bad times and see whether the net balance is positive or negative. Add up all the pleasures, subtract all the pains. If the balance is positive, your life is worth living. And the more positive the balance, the bigger the number, the more your life is worth living.
If the balance is negative, though, think about what that would mean. If the balance was negative, you're saying your future holds more pain overall than pleasure. And that's a negative. You'd be better off, well, you'd be better off dead, right? Because if you were dead, you'd have neither pleasure nor pain. Presumably, mathematically, if we gave it a number, we'd slap a zero on that. No positive number, no negative number. That's a zero. Obviously, if the balance of pleasure over pain is positive, that's better than zero. But if there's more pain than pleasure, so that the balance is negative, that's worse than zero. That's a life not worth having. That's what the hedonist says. Now, there are different ways of working out the details of the hedonist view. It's not, after all, as though all pleasures count equally or all pains count equally. The pain of stubbing your toe obviously doesn't count for nearly as much as the pain of a migraine, which doesn't count for nearly as much as the pain of being tortured. And so we might need to work out various, more complicated formulas here, where we multiply the pain times its duration and take into account its intensity, get the sheer quantity of pain that way. And similarly, pleasures can be longer lasting, or more intense. You can imagine how some of those details might go, and then some of the questions get rather tricky. But for our purposes, we don't really need to worry about the details. The thought is, roughly, weigh up the pleasures and pains in some appropriate way. Add up the pleasures. Add up the pains. See whether the grand total of pleasures is greater than the grand total of pains. The more positive the number, the better your life. Now, armed with an approach like this, we can do more than just evaluate entire lives. Well, one thing we can do is just that. We can evaluate entire lives.
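The rough bookkeeping the hedonist has in mind, weight each episode by intensity and duration, total the pleasures, subtract the pains, and treat death as the zero point, can be put as a small sketch. This is purely a hypothetical illustration, not anything from the lecture; the function name, the episodes, and the numbers are all invented for the example:

```python
# A toy version of the hedonist's calculus sketched above.
# Each episode is a (intensity, duration) pair: positive intensity
# for pleasure, negative for pain. The "sheer quantity" of an episode
# is intensity times duration, and death counts as the zero baseline.

def hedonic_value(episodes):
    """Return the net balance of pleasure over pain for a list of episodes."""
    return sum(intensity * duration for intensity, duration in episodes)

# A toy future: a two-hour migraine followed by a six-hour party.
future = [(-8, 2.0), (3, 6.0)]
balance = hedonic_value(future)
print(balance)  # prints 2.0
```

On this toy accounting, a future whose balance comes out above zero is, for the hedonist, better than the zero of nonexistence; below zero, worse.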
There you are at the pearly gates and you look back on your life, and you could, in principle, add up all the pleasures, add up all the pains, subtract the pains from the pleasures and ask yourself, "How good a life did I have? How well off was I, having lived that life?" And perhaps then you could imagine alternative lives. If only I had chosen to become a doctor, instead of having chosen to become a lawyer. How much better off or worse off would I have been? Or if I had decided to become an artist or a scholar or a beach bum or a farmer, how much better off or worse off would I have been? How much greater or smaller would the number be? Despite my talking about numbers, of course, there's no particular assumption that we can really give precise numbers to this. And we certainly don't think that in fact most of us are in a position to actually crank out any kind of accurate number. Most of us don't know enough to know with a high degree of accuracy how things would have gone had I decided to become a farmer instead of a philosopher. Still, the hedonist isn't saying that from a practical point of view we can necessarily do this. But in theory, in principle, this is what we're wondering about when we face choices. We can ask ourselves, "What would our life look like? Would it be better or worse?" And the yardstick that we're at least doing our best to apply is one of measuring up the pleasure and subtracting the pain. And of course, the hedonist will also hasten to point out that just because we can't do this perfectly or infallibly, that doesn't mean we can't make educated guesses, right? You're trying to decide, should you go to Yale for college or should you go to Ohio State or Harvard or wherever else you got into, and you ask yourself--well, you try to project your future and you ask, "Where do I think I'd be better off?
Which of these branches that are available to me, the branches of my life story--which is the one in which the future from here on out holds more pleasure and holds less pain?" That's how the hedonist says we should think about it. And notice, by the by, that when we make choices about our future, from the hedonist point of view at least, there's no particular need to dwell a whole lot on the past, because what's done is done. You're not going to alter how much pleasure you enjoyed previously, how much suffering you've undergone previously. What's open is the future. And so we're able to evaluate not just lives as a whole, looking back at the pearly gates. We're able to evaluate lives from here on out. Which of the various futures that are open to me are likely to give me the better life, leave me better off, measured in terms of pleasure or pain? And we do our best, however good or bad that may be. We do our best to make such comparative evaluations. And of course, we can do more than just evaluate the entire rest of my life. We can evaluate the next year or the next six months or, for that matter, just this evening. I can ask, well, what should I do tonight? Should I stay home and work on my paper? Should I go to the party? Where will I be better off tonight? Well, I'll probably enjoy myself more at the party than I will working on the paper. And the paper's not due for a while, and so forth. We make evaluations not just of entire lives, but of chunks of lives. All right. That's what we can do if we accept hedonism. But we haven't yet asked, should we accept hedonism? Now, it will not come as news to me if I were to learn that several of you, maybe even many of you, in this class accept hedonism. It's a very popular view. Not just among philosophers, where it's a view that's been around as long as there's been philosophy, but among people in the street.
It's a very tempting view to think that what makes life worth having--the only thing worth having for its own sake--is having pleasure and avoiding pain. But for all that, despite the popularity of that view, I'm inclined to think it must be wrong. It's not that I think pleasure isn't good and pain isn't bad. Where hedonism goes wrong is in saying that's the only thing that matters. I'm inclined to think there's more to the best kind of life than just having pleasure and avoiding pain. Now, I already revealed that when I was talking about the rat lever machine. I said, hook me up to that machine and I'll enjoy myself. But I don't want that for myself. Why? Because there's more to life than just pleasure and the absence of pain, or so it seems to me. Still, we might say, the rat lever is not the only kind of pleasure there is. There are all these pleasures of experiencing art and seeing a beautiful sunset. And I don't know about you, but at least when I imagine the rat lever thing, it's a sort of simple, undifferentiated pleasure. So, that really won't do the trick in giving us the best quality pleasures, of the kinds that humans most crave--the pleasures of friendship and discussion and sexual intimacy. These pleasures the rat lever machine wasn't giving us. So couldn't hedonism still be true? Couldn't it still be the case that, as long as we take into account the importance of getting the right kinds of pleasures, then really pleasure is what it's all about and all that it's about? No, I think that's still not right. But to see why, we'll need to move to something fancier than the rat lever machine. Here, the relevant thought experiment was suggested by Robert Nozick, a philosopher who died a few years ago, having taught for many years at Harvard. Nozick invited us to imagine an experience machine.
So, suppose that the scientists have discovered a way not just to stimulate the particular little pleasure center of the brain, but basically to give you completely realistic virtual reality. So that when you are hooked up to the machine, it seems to you exactly the same on the inside as it would seem to you if you really were--and now fill in the blank. You could have the identical experience of climbing Mount Everest, let's say, so that you'll feel the wind bracing you. Of course, you won't really feel any wind. Strictly speaking, that's not true, because you're not up on Mount Everest. There is no wind. What's really going on is you're floating in the psychologists' tank in their lab with the electrodes hooked up to your brain. But you don't know that you are floating in the tank. Hooked up to the machine, you believe you are climbing Mount Everest. You feel the thrill of having made it to the top and the wind bracingly striking your chin, and you feel the satisfaction, and you've got the memories of having almost died when the rope broke before. It's not like being at the IMAX. The crucial point is that when you're at the IMAX, although it's very realistic, part of you is aware that you're just in the theatre. But on the experience machine, you don't know you're just in the lab. When you're on the experience machine, your brain is being stimulated in such a way that you've got the identical experience on the inside to what it would feel like if you really were doing these things. So, imagine a life on the experience machine. Imagine plugging in the tape. It says something about how old this example is that we talk about plugging in the tapes. Imagine plugging in the DVD, or whatever it is, with all of the best possible experiences. Whatever you think those are.
Here, you might imagine different people disagreeing about what those are. If what you want to do is write the great American novel, then you've got the experience of staying up late at night not knowing how to make the plot work out, crumpling up pieces of paper and throwing them away--or crashing your computer, or whatever it is that you do as you write the great American novel. Or you want to be finding the cure for cancer. So you've got exactly the experience you would have if you were working in your lab, having the brilliant breakthrough when you finally realize what the combination is that would make the right antibody, whatever it is. Or if you want to be observing all the most beautiful sunsets in the most exotic locales, you've got exactly the experience you would have if you were doing all these things. That's life on the experience machine. You're not doing any of it. You're floating in the lab. But the experiences are identical. Now, ask yourself then, would you want to spend your life hooked up to the experience machine? Ask yourself, how would you feel if you discovered now that you have been living your life hooked up to an experience machine? Now, I've got to make a footnote here. This perfectly glorious philosophical example has been ruined in recent years by the movie The Matrix. Because whenever I tell this story now, people start saying, "Oh, well, the evil machines are busy using your body as a battery," or whatever it was in the movie, right? And, "What if people are nefariously feasting on my liver while I'm having these little experiences?" Don't imagine any of that. It's not that the evil scientist is deliberately deceiving you so as to conduct his nefarious experiments. Nothing like that. And similarly, while we're at it--this is not a Matrix-like worry--if you're worried about, yeah, but what's happening to world poverty while I'm doing all of this?
Just imagine that everybody's hooked up to experience machines, but everybody's got the best possible tapes. Now you ask yourself, what I'm asking you to ask yourself, is would you want to spend your life hooked up to the experience machine? I'm not talking about, wouldn't it be interesting to try it out for a week or a month or even a year? And indeed, the question, strictly speaking, isn't even would life on the experience machine be better than it is now? Although it would make me very, very sad to discover this, I suppose it's possible some of you have such bad lives that moving on to the experience machine would be a step up. That's not the question. The question is, does life on the experience machine give you everything worth having in life? Everything worth having in life. Is it the best possible form of human existence? According to the hedonists, the answer's got to be, it has to be "yes." Life on the experience machine is perfect, as long as you've got the right tape plugged in. So, you've got the best possible balance of wonderful pleasures and wonderful, fantastic experiences, since that's all there is to human wellbeing. By hypothesis, the machine is giving us that. There couldn't possibly be anything more. There couldn't possibly be anything missing. But when I think about the question, would I want to spend my life hooked up to an experience machine? the answer is "no." And I imagine that for most of you, when you ask yourself, would you want your entire life to be spent hooked up to the experience machine? your answer is "no." But if the answer is "no," then that means hedonism's got to be wrong. If life on the experience machine is not everything, then there's more to the best possible life than getting the insides right. 
The experience machine gets the pleasures right, gets the experiences right, gets the mental states right; it gets the insides right. But if life on the experience machine isn't all that's worth wanting out of life, then there's more to the best possible life than getting the insides right. What we've got to turn to next time, then, is the question, what else might it be? Okay.
YaleCourses, Philosophy of Death -- Lecture 16: Dying Alone; The Badness of Death, Part I

Professor Shelly Kagan: --Tolstoy's Ivan Ilyich is surprised to discover that he's going to die. It's the sort of thing he's given lip service to, no doubt, over the course of his life. But when he finally gets ill and comes face to face with the fact of his mortality, that his body is going to sicken and eventually die, the fact of his mortality seems to shock him, seems to surprise him. We might say, on one level he believed that he was mortal. He's believed it all along. But at another level, at some deeper level, it comes as a surprise to him. He never really believed it. Now, I take it that we find Ivan Ilyich a perfectly believable example. That is, we think it's conceivable that somebody could, at some level, not really believe they're going to die. But I also take it that Tolstoy means to be putting forward more than just the claim that there could be such a person--"Look how bizarre he is. Let me describe him for you." Rather, the suggestion is meant to be that Ivan Ilyich's case is rather typical. Maybe all of us are in his situation, or at least most of us are in his situation. Or, at the very least, many of us are in his situation. That's a stronger claim, though I think it's not the sort of claim that's unique to Tolstoy: that all of us or most of us or many of us, at the fundamental level, don't really believe that we're going to die. You might ask, what kind of evidence could be offered for that? Offering a realistic scenario, a realistic description of such a person--Ivan Ilyich--doesn't give us any reason to think that most of us or many of us are in his situation. So, is there any reason to think that? You might ask, what kind of argument could be offered for a claim like that? What we'd be looking for, I take it, would be some kind of behavior on our part that calls out for explanation. And the best explanation is to be had--this is how the argument would go.
The best explanation is to be had by supposing that those people who behave this way--let's suppose it's many of us who behave this way--the best explanation of that behavior is to be found by claiming that at some level, at some fundamental level, we don't really believe what we claim to believe. We don't really believe what we give lip service to. Take somebody who perhaps suffers from some sort of compulsion to wash his hands. We ask him, "Are your hands dirty?" He might say, "No, of course not." And yet, there he is, going back to the bathroom, washing his hands again. You might say, the only way to explain the behavior is to say that at some level, he really does believe his hands are dirty, despite the fact that he says they're not. Well, in the same way, if we could find some behavior on our part that calls out for explanation, where the best possible explanation would be that at some level we don't believe we're going to die, then we might say, look, this gives us some reason to think that we don't really believe we're going to die, even though we say we believe it. Suppose, for example, that if you really did believe, fundamentally, unconsciously, all the way down--however we should put it--if you really did believe you were going to die, the horror of that would lead you to start screaming and just keep screaming. Of course, this example reminds us again of Ivan Ilyich, who screams and screams and screams almost till his death. Well, suppose this was true. Suppose we had good reason to believe that if you really took seriously the thought that you were going to die, you couldn't stop screaming. But of course, nobody here is screaming, from which we can conclude that none of us really do believe, fundamentally, deep down, that we're going to die. That would be a good argument, if we had good reason to believe the conditional, the if-then claim: if only you really truly believed you were going to die, you would scream and scream and scream.
That's the crucial premise. And of course, we don't have any good--as far as I can see--we don't have any good reason to believe that crucial premise. You might ask though, is there some other behavior, something else that should tip us off, could tip us off, as to whether or not we really do or don't believe that we're going to die? Well, here's the best that I can do. This strikes me as the most plausible contender for an argument like this. As we know, there are people who have brushes with death. They might be, for example, in an accident, and come close to being killed, but walk away without a scratch. Or suffer a heart attack and be on the operating table for some number of hours and then, thanks to cardiac surgery or what have you, be resuscitated. When people have these near brushes with death, it's easy to believe that the fact of their mortality is more vivid. It's more before their mind's eye. It's something that they now really truly do believe. And the interesting point is many people who have this sort of experience, for whom their mortality has become vivid, they often say, "I've got to change my life. I need to spend less time at the office and more time with my family, telling the people that I love that I love them, doing the things that are important to me, spend less time worrying about getting ahead, making money, getting the plasma TV," whatever it is. Let's suppose that this is true of all of us, or at least most of us. When we find the fact of our mortality is made vivid, when we really truly can see that we're mortal, then we change our priorities, stop giving all the time and attention to trying to get ahead in the rat race and spend more time with our loved ones doing what's important to us. Suppose that claim were true. 
Well, armed with that claim, we might notice, well look, of course, most of us do spend a lot of time trying to get ahead, trying to earn a lot of money, don't spend the bulk of our time doing the things that we really truly think are most important to us, don't tell our friends, don't tell our family members how much they mean to us, how much we love them. What are we to make of that fact? Well, maybe the explanation is, although we give lip service to the claim that we're mortal, at some more fundamental level, we don't truly believe it. The belief's not vivid for us. We don't believe it all the way down. Well, this is an argument--at least it seems to me--that has some chance of being right. I'm not at all convinced that it is right. But at least it doesn't seem to be the sort of argument, unlike some of the arguments I've considered last time about oh, nobody believes they're going to die because you can't picture being dead or what have you. This argument, I think, has some possibility of being right. It does seem as though people who have brushes with death change their behavior in significant ways. The fact that we don't behave in those other ways gives us some reason to believe that perhaps at some level we don't completely or fully or fundamentally believe we're going to die. As I say, I'm not sure whether that argument's right. But at least it's an argument worth taking seriously. Let me turn now to a different claim that sometimes gets made about death. This is the claim--not that nobody believes they're going to die; that's the one we've been talking about for the last lecture or so--but instead, the claim that everybody dies alone. This sounds like one of those deep insights into the nature of death. It's got that kind of air of profundity about it that philosophy's thought to have or aspires to have. Everyone dies alone. This is telling us something deep and important and interesting about the nature of death. 
Now, as it happens, this is one I'm going to be completely dismissive of. I think, as far as I can see, that the claim "we all die alone," however we interpret it, just ends up being implausible or false. I give it such a hard time each time I teach this class, that I'm often tempted to just drop it from the discussion altogether. Even though, if you've done the reading of the Edwards paper that I assigned, you have a series of quotes from Edwards in which people say things like, they die alone. I sometimes come away after this discussion thinking, "Why am I wasting our time? Nobody really believes this, that we all die alone." Last year I was virtually ready to drop it and then, I kid you not, that very afternoon, I came across a quote. I'll share this with you in a second. Somebody saying, "Oh, we all die alone." And then I think it was two days later, a week later, I came across another quote of somebody saying, "Oh, we all die alone." It made me think, "Oh, I guess this is a common enough thought." So here are the two quotes. But I think once you start looking for them, you find them everyplace. This first one is from the folk singer Loudon Wainwright III, from his song Last Man on Earth. "We learn to live together and then we die alone." We die alone. Interesting claim. It seems to say something important about the nature of death. Here's another quote. This is from the children's book, Eldest by Christopher Paolini, the sequel, of course, to the bestseller Eragon. "‘How terrible,' said Eragon, ‘to die alone, separate, even from the one who is closest to you.'" The answer given to Eragon, "Everyone dies alone, Eragon, whether you are a king on a battlefield or a lowly peasant lying in bed among your family, no one can accompany you into the void…" Everyone dies alone. As I say, this is a common enough view. Two quotes. I could certainly produce others. Everyone dies alone. 
The trick--The question we're going to ask is, can we find some interpretation of that claim under which, first of all it ends up being true, secondly, it ends up being a necessary truth about death? Suppose everyone happens to die on Monday, due to some cosmic coincidence. It might be sort of interesting, but it wouldn't tell us something deep about the nature of death, if people could just as easily die on Tuesday. If it happened to be that everybody dies in a room by themselves, that would be interesting. We might wonder what causes it. But it wouldn't be some deep insight into the nature of death. We're going to get a deep insight if it's a necessary truth about death that everyone dies alone. So it's got to be true. It's got to be a necessary truth. And, of course, it's got to be an interesting claim. If, when we interpret the claim "everyone dies alone," that just ends up being a slightly pretentious way of saying everyone dies, we might say, oh yeah, that is true and it is a necessary truth, but it's not especially surprising. It's not some deep surprising insight into the nature of death. We all knew everyone dies. You take that familiar fact and you wrap it up in the language "everyone dies alone." If that's all you're saying, you're not saying anything interesting. When people say, "You know, everyone dies alone," you're supposed to be gaining some deep insight into the nature of death. Finally, "everyone dies alone" is supposed to say something special about death. It better not be that everyone does everything alone, because--in whatever the relevant sense of alone turns out to be--if everyone does everything alone, then of course that might be interesting. It might be very important and insightful, but you're not saying anything especially interesting about death when you say everyone dies alone, if it's also true that everyone eats their lunch alone. 
So, as we begin to ask ourselves what it could possibly mean when people say "everyone dies alone," we're looking for something that's true, necessary, interesting and, if not unique to death, at least not true of everything. I put these conditions down because, of course, what I want to suggest is although the sentence "everyone dies alone," the claim that everyone dies alone, is one of these things that people say, they're not really thinking very hard about what they mean by it. Because once you actually push people, pin them down on what they mean by it, you end up with something that's either just not true, or not interesting, or not necessary, or not particularly unique to death. Take a possible interpretation. The most natural, straightforward, literal, flat-footed interpretation. To say that somebody does something alone means they do it not in the presence of others. Somebody who lives by himself goes to sleep. If there's nobody else in the bedroom, he's sleeping alone. On that straightforward interpretation, to say that everybody dies alone, what we're saying is that it's true of each one of us that he or she dies not in the presence of others. If that was true, it would be sort of surprising, striking. We might wonder whether it's a necessary truth. But at least there'd be something interesting there. But of course, it's not true. We all know full well that sometimes people die in the presence of others. We read earlier this semester Plato's Phaedo, which describes the death scene of Socrates. Socrates drinks the hemlock and dies in the presence of his friends and disciples. Socrates does not die alone. And of course, we know that there are many, many other cases in which people die in the presence of their friends, family, loved ones. It's just not true, given that interpretation, to say we all die alone. So if that's what the claim means, it's false. Our challenge is to find some other interpretation of the claim.
All right, second possibility. When people say "everyone dies alone," they don't mean to be saying you die, but not in the presence of others. They mean to be saying rather, even if there are others around you, even if there are others with you, dying is something that you're doing alone. They aren't dying. Socrates' friends and disciples are not dying. He's the only one dying. And so everyone dies alone in that sense. Well, that's an interesting claim, if it's true, but it's not true. We could certainly have battlefields in which many people are dying along with others. There is Jones dying, but he's not dying alone. There's Smith dying at the same time right next to him. If that's what people mean when they say "everyone dies alone," then that's clearly false as well. I presume that's not what people meant either. But then what was it that they did mean? Well, we could do better. We could say, look, when Socrates dies, he's dying alone in the sense that he's doing it by himself. He's not doing it in cooperation with anybody else, in coordination with anybody else. On the battlefield, even if Smith and Jones are both dying, it's not like this is some sort of cooperative, joint undertaking. You could be walking down the sidewalk and Linda could be walking down the sidewalk and even though you're both walking down the sidewalk, you're not walking down the sidewalk together. In contrast, you can walk down the sidewalk with somebody. Say, "Hey, let's go to the library." And you walk down the sidewalk together. Walking is something you can do with others, in the sense that it can be a joint activity, a joint undertaking. Perhaps the claim then is that dying is not something that can be done in that way as a joint undertaking. Even if you're in a room or a battlefield where people are dying at the same time as you, to your left and your right, dying is not and cannot be something that is a joint undertaking. 
Well, that might be a proposal about what people mean when they say "everybody dies alone." And if it is, all I can say is, again, it just seems to be false. Now admittedly, dying as a joint undertaking is far rarer than dying alone. But for all that, we were looking for some deep insight into the nature of death. Everyone dies alone. Everyone must die alone. That's only going to be true if dying as a joint undertaking is impossible. But it's not impossible. You could have, for example, some sort of suicide pact. There have been cases, gruesome as they may be, in which entire groups of people drink poison together so as to die not alone, but die together, die as part of jointly dying, dying as a group. Or you could have (I'm told that this sort of thing happens) a couple in love who together jump off a cliff, committing suicide together, dying not alone but with each other as part of a joint undertaking. It certainly seems possible. I take it cases like this actually do occur. So if somebody comes along and says "No, no, everybody dies alone, and dying as part of a joint undertaking is impossible," they're just saying something false. These joint undertakings are, well, you might think of them as analogous to playing chamber music with a string quartet. It's something you're doing with others. It's not just a coincidence that all these people happen to be playing the violin, viola, or what have you next to you. No, no, we deliberately coordinated with one another so as to together produce this music. It seems possible in the case of string quartets. It seems possible in the case of joint suicide pacts as well. Well, a fan of the claim that we all die alone might come back and say, "Well, in the case of the string quartet, although it's true that I am playing with others, somebody could take my part. Somebody else could play the second violin part for me.
Whereas, in contrast, when I die, even if I'm dying with others, nobody can take my part." So perhaps that's what the claim is meant to be when people say, "everybody dies alone." Nobody can die your death for you. Nobody can take your part. Now if that's what they mean, then--a small observation--they didn't express themselves very clearly. It seems to me rather a long distance from the thought, "nobody can die for me, nobody can take my part," to the claim, "everybody dies alone." That seems a rather misleading, unhelpful, way of making your point. But let's just bracket that complaint. Is it true that nobody can take my part? Certainly people can take my part in the string quartet. Is it true that nobody can take my part in terms of my death? Not so clear it is true. I don't know how many of you have read A Tale of Two Cities. If not, I'm about to spoil the plot for you. Here's at least a strand of the story. The hero of the story is in love with a woman who--alas and alack--does not love him. She loves another man. This other man--alas and alack--has been condemned to death during the French Revolution. Now as it happens--this is a novel--as it happens, our hero looks rather like the other man. And so as the other man is being carted off to the guillotine to be killed, our hero takes his place. Hence, the famous speech, "'Tis a far, far better thing I do today." Our hero sacrifices himself so that the woman he loves can have the man that she loves. Well, for our purposes, the romance isn't crucial. For our purposes, the crucial point is to see that what seems to be going on there is our hero is taking the place of somebody else who's about to die. Just like somebody could take my place in the string quartet, it seems that somebody could take my place at the guillotine. In the American Civil War, there was a draft, but you could avoid it by hiring somebody to take your place, if you were rich enough.
Well, you're in some battle, or rather, your troop is in some battle, and people are being killed left and right. Well, I suppose it doesn't strike me as an implausible thing to say that if everybody in the troop got killed and you would have gotten killed had you been there, but instead, the person you hired to take your place gets killed, then he took your place. He substituted for you in the death. So again, we don't have any clear, true interpretation of the claim that nobody can take my place, even with regard to dying. Well, easy to imagine the fan of this view coming back yet again and saying, "Although it's true that our hero takes the place of the other man on the guillotine, what ends up happening, of course, is that our hero dies his own death. He doesn't take over the death of the other man. The death of the other man doesn't take place until 20, 30, 40, whatever it is, years later. Nobody can take my place at my death. Because, of course, if they take my place, they end up going through their death, not my death. My death is something that only I can undergo." Now again, that's an interesting claim if it's true. At least it seems to be an interesting claim. It seems to say something interesting about death. Again, I want to just notice that it's a rather odd thing to try to express that point in the language "everyone dies alone." But just bracket that. Have we at least found something interesting, necessary, unique to death when we say, "Nobody can die my death for me. I am the only one who can undergo my death"? Each of us must undergo his own death and nobody else's death. Nobody can undergo somebody else's death for them. Well, that does seem to be true and it seems to be a necessary truth. But we're not quite done. Is it saying something deep and interesting about the nature of death? Is it something that's fairly unique to the nature of death? That nobody can die my death for me.
Actually, I don't think it is. Consider getting your hair cut at the barber. Now of course, somebody else can take your slot. All right, there's somebody who comes along and says, "Oh, I need to get to a date. I'm going to be late. Would you mind my having your appointment, using your appointment?" "Oh, I'm willing to wait. It's okay," right? So you might say, in some loose sense they've gotten your haircut. But of course, as it ended up, they didn't really get your haircut. They got their haircut. Think about haircuts. Nobody can get my haircut for me. I'm the only one who can get my haircut. If somebody else tries to get my haircut, they just end up getting their own haircut. Of course, it's not just special about haircuts. Talk about getting your kidney stones removed. Nobody else can get my kidney stones removed for me. I'm the only one who can get my kidney stones removed for me. Think about eating lunch. Nobody can eat my lunch for me. If somebody else tries to eat my lunch, they end up--it becomes their lunch. They've eaten their lunch for themselves. Nobody can eat my lunch for me except for me. If you think about it, it's true about just about everything. Maybe indeed everything. If you emphasize the word "my" enough, nobody can do much of anything for me and still have it be my such and such. In short, even though it's true that nobody can die my death for me, this isn't some deep insight into the special nature of death. It's just a trivial grammatical point about the meaning of the word "my." All right, remember where we're at. We're looking for interpretations of the claim "everyone dies alone." And by now we've gone rather far afield in the search for an interpretation of that claim. But we have not yet been able to find a claim, an interpretation, which is true, interesting, fairly special about death, as opposed to trivially true about everything, and giving us some relatively interesting insight into the nature of death. 
I can't see it for the claim "everyone dies alone." At least not if we try to take these claims fairly literally or take them to be metaphysical claims about the nature of death. But maybe I've just been flatfooted here in thinking that this is some sort of claim about not being with others, or things I do by myself. Maybe the claim "we all die alone" is intended as a kind of metaphor. It's not that we all really do die alone. It's that when we die, it's as though we were alone. It's like being alone. Maybe the claim "we all die alone" is a psychological claim, that the psychological state we are in when we die is similar to loneliness. It's similar to the feeling of being alone that we have in various situations. Now, that would be interesting if it was true. Is it true that when we die we all die having this feeling of loneliness, or perhaps feeling of alienation? It's easy enough to imagine somebody who is surrounded by other people as he's dying. And yet, for all that, feels removed, distant, alienated from the others, feels lonely even in the crowd. Is that true of all of us? Remember, we're looking for a claim that says, that makes it true, that everyone dies alone. Is it true that everyone dies feeling distant and removed? Maybe it was true of Ivan Ilyich. Ivan Ilyich progressively grows more and more distant from his family and friends who, indeed, remove themselves psychologically from him. He faces his death with a feeling of alienation and being alone. It's a metaphor, but still an important insight into his psychology. The question we have to ask is, "Is that true of everybody? Is it true that everybody dies alone in this psychological sense?" It doesn't seem to be true. First of all, notice the obvious point that sometimes people die in their sleep, unexpectedly. They weren't ill. They just die of cardiac arrest while they're sleeping. Such a person presumably is not feeling lonely or alienated while he dies. 
Well, you might say, "Okay, what we meant was anybody who's awake while they're dying, dies alone." That's not true either. You're crossing the street, talking to your friend, engaged in lively discussion. So lively, you don't notice the truck that's about to hit you. The truck hits you, you die, painlessly and immediately. Well, were you feeling alienated and distant during your final moments? No, it doesn't seem right either. So it certainly doesn't seem true to say that everybody dies feeling these psychological feelings of loneliness. Well, maybe we have to revise the claim yet again: everybody who dies awake, realizing that they're dying, facing the fact that they're dying, dies alone. That would take care of the sleep case. That would take care of the truck case. Is the claim true then? It would still be interesting if it was true, even given those restrictions. But it doesn't seem true then either. Again, just recall Socrates. Socrates is engaged in philosophical discussion with his friends, knows he's about to die. He's drunk the hemlock. He's sitting there saying goodbye to everybody. He doesn't seem alienated. He doesn't seem to be feeling distant and alone. It just doesn't seem true that everybody who knows they're going to die and is facing their death feels lonely. Another example of this is another philosopher, David Hume, whom we'll be reading at the end of the semester. We'll be reading his essay on suicide. Hume died of an illness, and he was quite sociable to the end. He used to bring people in to sit around his deathbed talking about various matters with him. He was cheerful and pleasant to the end. And there's, as far as I can see, no reason at all to believe that he was feeling lonely, feeling distant, feeling alienated from the people who were keeping him company. So the psychological reading doesn't do any better, as far as I can see.
Well, maybe there's some other interpretation, and I invite you to reflect on the question. Is it true that we all die alone? Is there some way of understanding that claim where it's true, a necessary truth, fairly special and unique, if not altogether unique, at least fairly special about death, showing us some deep insight into the nature of death--as opposed to some trivial insight about the way the possessive first person pronoun "my" works? I can't find it. So despite the fact that the claim "we all die alone" is one of these things that one hears, I think it's just nonsense. I think it's people talking without giving a moment's thought to what they meant when they said it. All right, where are we? For the first half of the course, we've been engaged in metaphysics, broadly speaking. We've been trying to get clear about the nature of the person, what we're composed of, so that we could then try to get clearer about the nature of survival and identity of persons, so that we could think about the nature of death, metaphysically speaking. What happens when we die? And as you know, I've defended the physicalist conception, according to which all we are are just bodies capable of doing some fancy tricks, capable of P-functioning. And details aside, death is a matter of the body breaking, so that it's no longer able to engage in P-functioning. As we saw, depending on the particular details of which theory of personal identity you accept--the body view, the brain view, the personality theory of personal identity--we might have to say slightly different things about whether the death of my body means I no longer exist, whether we should distinguish the death of the body, the death of the person, and so forth. But those details aside, roughly speaking, the following is true. When the body breaks, I cease to exist as a person. 
And even if we can hold out the logical possibility of my being resurrected--or my continuing to exist with a different body as long as it's got my personality, if you happen to accept the personality theory--even though there is the logical possibility of surviving my death or coming back to life, I see no good reason to believe that those logical possibilities are actual. As far as I can see, when my body dies, that's it. As a fan of the body view, I believe I'll still exist for a while. I'll exist as a corpse. But that's not the kind of thing about existence that mattered to me. In terms of what mattered to me, what I wanted was not just that I exist, but that I be alive, indeed be a person, indeed be a person with pretty much the same personality. And the truth of the matter is, when my body dies, that's all history. That's where we're at in terms of the metaphysics. We could summarize this by saying, when I die, I cease to exist. That's a little bit misleading, given the view I just sketched where even though I'm dead I still exist for a while as a corpse. But those issues won't concern us in what we're about to turn to. Let's just suppose, for the sake of avoiding those complications, that when my body dies, it gets destroyed. And so the very same moment will be the end of my body, the end of my existence, the end of my personhood. Let's suppose that my personality doesn't get destroyed any sooner than the death of my body. We've got the end of my existence. Here I am going along. The atomizer comes along, blows me up. Then simultaneously, we've got the death of my person, the death of my body, the end of what matters to me, the end of my existence. Death is the end. And even though these things can come apart slightly under certain scenarios, those details won't matter for what we're about to turn to. Well, what are we about to turn to? We're about to turn to value theory.
We spent the first half of the semester, you might say, trying to get clear about the metaphysical facts. And now that we've done that as best we can, we want to turn to the ethical or value questions. How good or bad is death? Why is--I take it, we all believe death is bad. Why is death bad? How can death be bad? So this is the big continental divide for the course. The first half of the class was metaphysics. Now we turn to value questions. And the first question we're going to be focusing on is just this, the question of the badness of death. How and in what ways is death bad? I take it, most of us do believe that death is bad. That's why we wish--maybe some of us believe, but at the very least the rest of us, many of us hoped--there were souls, so that death wouldn't have to be the end. If death is the end, that seems to be horrible. So we're going to turn to questions like this. How and in what ways is death bad? And then we're going to turn to the question, is it really true that immortality would be good? And eventually, we'll turn to some other value questions about if death really is the end, should we be afraid of death? I take it that fear of death is quite common. But we can actually evaluate different emotions and think about whether these emotional responses are appropriate or not, so we can ask whether or not fear of death is appropriate. We'll turn eventually to the question, how should we live in light of the fact that death is the end? And the last question we'll turn to is, could it ever make sense to kill ourselves? So these are the kind of moral or value questions we'll be concerned with until the end of the term. But the first one is simply, is death bad, as we typically take it to be, and, if so, what is it about it that makes it bad? So again, I'm going to suppose here on out that the metaphysical view that I've been sketching is right; that physicalism is true. The death of my body is the end of my existence as a person. Death is my end. 
Well, if that's right, how can it be bad for me to die? After all, once I'm dead, I don't exist. If I don't exist, how can it be bad for me that I'm dead? It's easy to see how you might think, how you might worry about the badness of death, if you thought you would survive your death. Now, if you believed in a soul, then you might worry about, well, gosh, what's going to happen to my soul after I die? Am I going to make it up to heaven? Am I going to go to hell? You might worry about how badly off you're going to be once you're dead. The question makes perfect sense. But it's often seemed to people that if we really believe that death is the end--and that's the assumption that I'm making from here on out--if we really believe death is the end, how can death be bad for me? How could anything be bad for me once I'm dead? If I don't exist, it can't be bad for me. Well, sometimes in response to this thought, people respond by saying, "Look, death isn't bad for the person who's dead. Death is bad for the survivors." John's death isn't bad for John. John's death is bad for the people who loved John and now have to continue living without John. John's death is bad for John's friends and family. When somebody dies, we lose the chance to continue interacting with the person. We're no longer able to talk with them, spend time with them, watch a movie, look at the sunset, have a laugh. We're no longer able to tell them our troubles and get their advice. We're no longer able to interact with them. All that's gone, when somebody dies. And the claim might be, that's the central bad of death. Not what it does for the person who dies. It's not bad for the person who dies. It's what it does for the rest of us. Now, I don't in any way want to belittle the importance of the pain and suffering that happen for the rest of us when somebody that we care about dies.
Indeed, let me take a moment and read a poem that emphasizes this thought, because this is certainly one central, very bad thing about death. It robs us of our friends--we, the survivors--it robs us of our friends and loved ones. The poem is called Separation, by the German poet Friedrich Gottlieb Klopstock. It appears in one of the essays you'll be reading later in the semester by Walter Kaufmann, "Death Without Dread," where he quotes it. The poem, as I say, is Separation.

You turned so serious when the corpse was carried past us;
are you afraid of death? "Oh, not of that!"
Of what are you afraid? "Of dying."
I not even of that. "Then you're afraid of nothing?"
Alas, I am afraid, afraid… "Heavens, of what?"
Of parting from my friends.
And not mine only, of their parting, too.
That's why I turned more serious even than you did,
deeper in the soul,
when the corpse was carried past us. [Kaufmann 1976]

The poem is called Separation. According to Klopstock, the crucial badness of death is losing your friends. When they die, you lose them. And as I say, I don't in any way want to belittle the central badness of that. But I don't think it can be at the core in terms of what's bad about death. I don't think that can be the central fact about why death is bad. And to see this, let me tell you two stories. Compare them. Story number one. Your friend is about to go on the spaceship which is going to do the exploration of Jupiter or whatever. And they're going to be gone for years, years and years. It takes so long that by the time the spaceship comes back, 100 years will have gone by. Maybe it's not Jupiter. It's farther away. Worse still, about 20 minutes after the ship takes off, all radio contact between ship and earth will be destroyed. It won't be possible, because of the speed. It's not going to Jupiter. It's going to some other planetary system. So, all possibility of communication will be destroyed. Now, this is horrible. You're losing your closest friend.
You will no longer be able to talk to them, share the moments, get their insights and advice. You'll no longer be able to tell them about the things that have been going on. It's the same kind of separation that Klopstock was talking about. Horrible, and it's sad. That was story number one. Story number two, just like story number one, the spaceship takes off, and about 15 minutes later, it explodes in a horrible accident and everybody on the spaceship, including your friend, is killed. Now, I take it that story number two is worse. Something worse has taken place. Well, what's the worse thing? We've got of course the very same separation we had in story number one. I can't communicate in the future with my friend. They can't communicate with me. But we had that already in story number one. If there's something worse about story number two, and I think it's pretty clear there is something worse, it's not the separation. It's something about the fact that your friend has died. Now of course, this is worse for me, as somebody who cares about my friend, that he's died. But the explanation of what's bad for me, in his having died, is the fact that it's bad for him to have died. And the badness for him isn't just a matter of separation, because that we already had in number one. We couldn't communicate with him. He couldn't communicate with us. If we want to get at the central badness of death, it seems to me, we can't focus on the badness of separation, the badness for the survivors. We have to think about how is it, how could it be true, that death is bad for the person that dies? That's the central badness of death and that's the one I'm going to have us focus on. How could it be true that death is bad for the person that dies? That's the question we turn to next time.
YaleCourses_Philosophy_of_Death | 14_What_matters_cont_The_nature_of_death_Part_I.txt

Professor Shelly Kagan: At the end of last class, I began to raise the question as to whether or not we should distinguish two questions that we would normally be inclined to run together. We've been asking ourselves, what does it take for me to survive, for me to continue to exist? But it's possible, I suggested, that we really shouldn't focus on the question, what does it take for me to survive? but rather, what is it that I care about? What is it that matters in survival? Because it's possible, logically speaking, that there could be cases in which I survive, but I don't have what I normally have when I survive, and so I don't have what matters. I don't have what I wanted, when I wanted to survive. It could be that in the typical cases of survival I've got that extra thing. But we can think of cases in which I would survive, but I don't have that extra thing, and so I wouldn't have everything that matters to me. So as it were, we might say, it might be that mere survival or bare-bones survival doesn't really give me what matters. What I want is survival plus something else. And I tried to motivate this question by having you suppose that the soul view was the truth about personal identity, and then imagine a case of complete, irreversible amnesia in which, nonetheless, it's still your soul continuing. But the soul is going to then, having been scrubbed clean, get a brand new personality. A new set of memories, new set of desires, new set of beliefs. No chance of recalling your previous personality, that is, your current one. And when I think about that case, I find myself wanting to say, all right, I'll survive, but so what? I don't care. It doesn't matter that it's me, in that case. Because I don't just want it to be me, I want to have there be somebody that's me with my personality.
Similarly, suppose we thought that the body view was the correct view and we imagine, again, some sort of case of complete amnesia. And so then we get a new personality and you say, "Oh look, that's going to be you, your body, your brain. You're still around." And I say, "It could be true, but so what?" It doesn't give me what I want, when I want to survive. What I want isn't just for it to be me. I want it to be me with my personality. So should we conclude, therefore, that what really matters is not just survival but having the same personality? Suppose the personality view of personal identity was correct. Would that then give us not just personal survival, but what matters? I think that's close, but no cigar. Not quite good enough. To see that, recall the fact that according to the personality view, as a theory of personal identity, the crucial point isn't that my personality stay identical. It's not that I have to keep all exactly the very same beliefs, desires, and memories. Because of course, if we said that, then I'd die as soon as I got a new belief. I'd die as soon as I forgot anything at all of what I was doing 20 minutes ago. No, according to the personality theory, what personal identity requires isn't item-for-item the same personality, but rather the same evolving personality. I gain new beliefs, new desires, new goals. I may lose some of my previous beliefs, lose some of my previous memories, but that's okay as long as it's a slowly-evolving personality with enough overlap. Okay, so now let's consider the following case. I start off. Here I am. I've got a set of beliefs, a set of--I believe I'm Shelly Kagan, a set of memories about growing up in Chicago. I have a certain set of desires about wanting to finish my book in philosophy and so forth. And I get older and older and older. And I get some new memories and some new desires and some new goals. Suppose that I get very, very, very old.
I get 100 years old, 200 years old, 300 years old. Somewhere around 200, suppose that my friends give me a nickname. They call me Jo-Jo. Who knows why, they call me Jo-Jo. And after a while, somewhere the name spreads and by the time I'm 250 years old, everybody's calling me Jo-Jo. Nobody calls me Shelly anymore. And by the time I'm 300, 350, 400, I've forgotten that anybody ever used to call me Shelly. And I no longer remember growing up in Chicago. I remember things about my youth when I was a lad of 100. But I can't go back to what it was like in the early days, just like you can't go back to what it was like to be four or three. And suppose that all this is going on as I'm getting older and older. My personality is changing in a variety of other ways. I lose my interest in philosophy and take up an interest in, I don't know, organic chemistry--something that currently holds no interest for me whatsoever. I become fascinated by the details of organic chemistry. And my values change. Now I'm a kind--now, over here--I'm a kind, compassionate, warm individual who cares about the downtrodden. But around 300, I say, "The downtrodden. Who needs them?" And by the time I'm 500, I become completely self-absorbed and I'm sort of a vicious, cruel, vile person. Here I am, 800 years old, 900 years old. Methuselah, in the Bible, lives for 969 years. He's the oldest person. So okay, here I am, 969 years old. I'm like Methuselah. Call this the Methuselah case. And the crucial point about the case is that we stipulate that at no point was there a dramatic change. It was all gradual, slow, evolving. In just the way it happens in real life. It's just that as Methuselah, I live a very, very, very long time. And by the end of it, and indeed, let's say somewhere around 600 or 700, I'm a completely different person, as we might put it. I don't mean literally. I mean in terms of my personality.
Now, remember, according to the personality theory of personal identity, what makes it me is the fact that it's the same evolving personality. And I stipulated that it is the same evolving personality. So that's still me that's going to be around 600 years from now, 700 years from now. But when I think about that case, I say, "So what? Who cares?" When I think about that case, I say, "True, we'll just stipulate that will be me in 700 years. But it doesn't give me what I want. That person is so completely unlike me. He doesn't remember being Shelly Kagan. He doesn't remember growing up in Chicago. He doesn't remember my family. He has completely different interests and tastes and values." I say "It's me, but so what? It doesn't give me what I want. It doesn't give me what matters." When I think about what I want, it's not just that there be somebody at the tail end of an evolving personality. I want that person to be like me, not just be me. I want that person to be like me. And in the Methuselah case, I've stipulated, it ends up not being very much like me at all. So it doesn't give me what I want. When I think about what I want--and I'm just going to invite you to, each one of you, to ask yourself what is it that you want, what matters to you in survival?--when I think about what matters to me, it's not just survival. It's not just survival as part of the same ongoing personality. It's survival with a similar personality. Not identical, item for item, but close enough to be fairly similar to me. Give me that, and I've got what matters. Don't give me that, and I don't have what matters. In fact, I'm inclined to go a little bit further. Once you give me that, give me that there's somebody there with my similar personality, I think that may be all that matters. Up to this moment, I've been saying, okay, survival by itself isn't good enough. You need survival plus something else.
And I'm now suggesting that in my own case at least, the something else, the something extra, is having a similar personality. It might be that I get what matters to me, as long as I have a similar personality, even if I don't have survival. Suppose--I don't believe in souls, but suppose there really are souls. And suppose the soul is the key to personal identity. And suppose the thing that Locke was worried about really does happen. Every day at midnight God destroys the old soul and replaces it with a new soul that has the very same personality as the one before midnight, similar personality, same beliefs, desires, and so forth and so on. If I were to discover that's what was happening metaphysically and the soul view was the true theory of personal identity, I'd say, "Huh! Turns out I'm not going to survive tonight. I'm going to die. Who cares? There'll be somebody around tomorrow with my beliefs, my desires, my goals, my ambitions, my fears, my values. Good enough. I don't really care whether I'm going to survive. What I care about is whether there'll be somebody that's similar to me in the right way in terms of my personality." So it might be that the whole question we've been focusing on, "What does it take to survive?" may have turned out to be misguided. The real question may not be "What does it take to survive?" but "What matters?" And it might turn out that although, normally, having what matters goes hand in hand with surviving, logically speaking, they can come apart. And what matters, or so it seems to me, at least, isn't survival per se, but rather having the same personality. Since I'm inclined to think that the body view is the correct theory of personal identity, I want to say, look, somebody around tomorrow, if overnight God replaces my body with some identical looking body and keeps the personality the same, that won't be me, but all right. It's good enough. What matters to me isn't survival per se.
Indeed, isn't survival, strictly, at all. It's having the same personality. Still, what does that leave us? That leaves us with the possibility that there could be cases where you die and you don't survive. Maybe God swoops me up upon death. My body dies, but he sort of swoops up my information about my personality and recreates somebody up in heaven with that similar personality. It won't be me, if it's a different soul. It won't be me, if it's a different body. But still, I want to say, it will give me what matters! That's a possibility. But I don't, in fact, think it's going to happen. I believe--I've told you I'm a physicalist--I believe that what's going to happen is, at the death of my body, that's going to be the end. Now, what I've been arguing is that, logically speaking, even if you are a physicalist, that doesn't rule out the possibility of survival. Suppose you believe in the personality theory. Your body's going to die, but your personality could continue. Or it might be, even as a body theorist, I'll cease to exist but what matters will continue. These are possibilities. But for what it's worth, I don't in fact believe they're actually what's going to happen. Of course, these are also theological matters, and so I'm not trying to say anything here today to argue you out of the theological conviction that God will resurrect the body or God will transplant your personality into some new angel body, but if you believe in the personality theory, that will be you, or what have you. I'm not--it's not my goal here to argue for or against these theological possibilities, having at least taken the time to explain philosophically how we could make sense of them. But I do want to report that I don't believe them. I believe that when my body dies, that's it for me. There won't be anything that's me afterwards. There won't be anything that's--even though what I want per se isn't survival.
Not only won't I survive, I believe after my death what matters to me in that situation won't continue either. There won't be somebody with a similar personality to mine after the death of my body. All right, so having spent all this time getting clearer about the nature of personal identity, and getting clearer about what people are, and the possibilities of survival, and so forth, having argued against the existence of souls, and for a physicalist view--physicalism seems compatible with both the body view and the personality view, leave it to you to decide between them, I myself currently favor the body view--let's ask, "So just what is death, anyway, on the physicalist view?" It might seem as though it's fairly straightforward. A person, after all, is just a body that's functioning in the right way so as to do these person tricks. It's P-functioning, as we've put it at one time or another. And so a person is just a P-functioning body, whether you emphasize the body side there or the personality side of that equation. What exactly is it to die? When do I die? Let's turn to that question. When do I die and what is death? Roughly speaking, the answer, presumably, on the physicalist view, is going to be something like--if I'm alive when we've got a P-functioning body, roughly speaking, I die when that stops happening, when the body breaks and it stops functioning properly. That seems, more or less, the right answer from the physicalist point of view, although as we'll see probably later today, we need to refine it somewhat. But first, let's ask a slightly different question. Which functions are crucial in defining the moment of death? After all, we've got the idea that here's the body, here's a functioning body. Here's one in front of you. Each one of you has got one. You're a functioning body. There's a variety of functions that your body's engaged in. 
Some of them have to do with merely digesting food and moving the body around, and making the heart beat, and the lungs open and close. Call those things the bodily functions. And there's also, of course, in each one of our cases, there's these higher mental cognitive functions that I've been calling the person functioning, there's the B-functions and there's the P-functions. Well, roughly speaking, I die when the functioning stops, but which functions? Is it the body functions or the personality functions? So let's take a look at the normal situation. Here's the existence of your body. And during most of the existence of your body, it's functioning. The body functions. Over here, it's no longer functioning. It's a corpse. During some of the period when your body's functioning, it's doing the higher cognitive stuff. The personality functions. Now, this is the very early stuff when your body's still developing and your brain hasn't turned on yet, or your brain is turned on, but it hasn't actually become a person yet, right? At least in the case of the fetus, it's not self-conscious. It's not rational. It's not able to communicate. It's not creative and so forth. That comes later. All right, so there's Phase A. There's Phase B. There's Phase C. That's the normal situation, the normal case. The body exists. It functions for a while before the P-functioning begins. And then after a while the body and P-functioning are both going on. And then after a while they stop. In the normal case, I'm in a car accident or whatever it is, and my body stops functioning, my personality stops functioning, and you're left with a corpse. When did I die? Well, the natural suggestion is to say I died here. I'll draw my little star, an asterisk. In the normal case, I die when my body stops functioning, in terms of the body functions. And it stops functioning in terms of the personality functions. That's the normal case. But we could still ask the philosophical question. 
Since what we had here was simultaneously losing both the ordinary body functioning and the special personality functioning, which loss was the crucial one in terms of defining the moment of my death? Let's come back to that question in a minute. First, I want to ask a slightly different question. When did I cease to exist? Or, to put it slightly differently, do I exist during Phase C, when the body has stopped functioning? Both in terms of body functions and personality functions, I'm just a corpse. Do I exist? Now, let's suppose we believe the personality theory of personal identity. According to the personality theory of personal identity, for something to be me, it's got to have the very same personality, the same evolving, but still the same set of beliefs, desires, goals, so forth. Now, during period C, there's nothing with my personality, right? Nobody thinks they're Shelly Kagan. Nobody has my memories, beliefs, exact desires, goals and so forth. Pretty clearly then, on the personality theory, I don't exist at Phase C. That's why it's natural to point to the moment of star when we say that's when my death occurs. I don't exist at Phase C. But interestingly, things look rather different if we accept not the personality theory, but instead, the body theory. After all, according to the body theory of personal identity, for somebody to be me, they've got to have my body. Follow the body. Same body, same person. All right, here we are. Here's my corpse. What is a corpse? It's a body, and indeed, my corpse is my body. So follow the body means follow the person. The corpse is still around. It means my body's still around. It means I'm still around. It's like, I mean, I'm dead, but I still exist. It's like a bad joke, right? So here's the question we started the class off with. Will you survive your death? Will you still exist after death? Well, there's good news and there's bad news. 
Since I believe in the body theory, the good news is, you will exist after your death. The bad news is, you'll be a corpse. That seems like a bad joke, but if the body theory is right, it's not a joke at all. It's literally speaking the truth. I will exist, at least for a while. Eventually, the body will decay, turn into atoms or whatever it is, decompose. At that point my body no longer exists. At that point, I will no longer exist. But at least for a while, during period C, the body theorist should say, "Yeah, you will exist. You will exist, but you won't be alive." It just reinforces the point that I was trying to make a few moments ago that the crucial question is not survival per se. The crucial question is, what did you want out of survival? And one of the things I wanted out of survival was to be alive. All right, so on the body view, I exist here, but I'm not alive, so it doesn't give me what matters. On the personality view, I don't exist when I'm a corpse. Let's go back and ask the question, well, so which is it? Which is the one that's crucial for defining the moment of death, right? Even on the body view, the fact that I exist isn't good enough, because I'm not alive. I want to know, when am I alive? When am I dead? So what's crucial for defining the moment of death? Is it body functioning or personality functioning? Well, you can't tell by thinking about the normal case, because the B-functioning and the P-functioning stop at the same time. But suppose we draw the abnormal case. All right, here's C with the corpse again. Here's a period when the body's been functioning and goes like this. Here's the period back here, A, where the body's been functioning, but the personality hasn't started yet. And now imagine, so this is personality. Over here we've got body. We'll call this B again. What I've done is imagine a case in which the personality functioning stops before the rest of the body functioning stops.
Obviously, the phases are no longer in alphabetical order, but I introduced D in the middle so the other phases could keep their same labels. Well, here's a case where--When does the body functioning stop? End of D. When does the personality functioning stop? End of B. So we've got two candidates. Star one and star two. Star one says death occurs when personality stops functioning. Star two says no, no, death occurs when bodily functioning stops. Well, again, the question is, what should we say? I think we're going to perhaps be drawn to different answers, depending on whether we accept the body view or the personality view. Suppose we accept the body view. Well, look, if the relevant question is "When do I die?" and I am a body, then presumably the straightforward answer at least is going to be "I die when my body stops functioning." When is that? Star two. During period D, I'm still alive, but I'm no longer functioning as a person. I am no longer a person. That's interesting. It's not just that I exist. In C, I can exist without being alive--as a corpse. In D, I'm alive but I'm not a person. You recall when we talked about Plato, we introduced the notion of essential properties. And it seems that if we accept the body view, we have to say being a person is not an essential property of being something like me. It's not one of my essential properties that I'm a person. I am, in fact, a person, but that won't always be true of me. When I'm a corpse, I will cease to be a person, but I'll still exist. And if we have this unusual case in which my brain has a stroke, loses its higher cognitive functioning, so that the body continues to breathe, eat, respirate, and so forth, the heart continues to pump, but there's no longer anything capable of thinking, reasoning, we say, look, I still exist. Indeed, I'm alive, but I'm not a person. Being a person is something you can go through for a period of time and cease to be.
In the same way that being a child is a phase you can go through for a period of time and then cease to be. Or being a professor is a phase you can go through and then cease to be. You can still exist without being a professor. I can still be alive without being a professor. Well, on the body view, we have to say the same thing about being a person. Being a person is something that I, namely my body, can do for a while. It wasn't doing it back here in A. It certainly won't be doing it in C. And it won't be doing it in D either. Being a person is something on the body view that I am only for part of my existence and indeed, only for part of my life. Well, that's what it seems we should say on the body view. What if instead we accept the personality theory? Then--actually, one more remark about the body view. Notice that if you accept this account of what the body view should say about when death is, my death is when I cease to be alive. I am my body. So my death occurs at star two, loss of bodily function. And being a person is just a phase. Notice that if we say that, then there's something somewhat misleading about the standard philosophical label for the problems we've been thinking about for the last couple of weeks. We've been worrying about the nature of personal identity. That is to say, what is it for somebody to be me. But notice that that label, "personal identity," "the problem of personal identity," seems to have built into it the assumption that whatever it is that's me is going to be a person. Is it the same person or not? Now, it turns out that that assumption, standardly built into the usual label, may be false. On the body view, it could still be me without being a person at all. So the problem of existence through time, or persistence through time, shouldn't be called the problem of personal identity, but just the problem of identity. You know, a footnote. Turning now again to the personality theory. 
If we accept the personality theory of personal identity, then for someone to be me, they've got to have the same personality. And so for something, for me to exist, my personality has to be around. Well, that's why we said up here that in Phase C when there's a corpse, I don't exist. There's nothing with my personality. As a corpse, I no longer exist. What should we say about Phase D, on the personality theory? Here, my body is functioning, but my personality has been destroyed. Nothing exists with my beliefs, memories, desires, fears, values, goals, ambitions. Well, if I just am my personality, then I don't exist in Phase D, because there's nothing there to be me, nothing with my personality. According to the personality theory, follow the personality. The personality ended at star one. So I don't exist at Phase D on the personality theory. Okay good. I don't exist. But what should we say? Am I alive or not? Well, my body's still alive. So should we say that I'm alive? After all, my body's still functioning until star two. During Phase D, my body seems to still be alive. Should we say that I'm alive? That's rather hard to believe, right? Think about what it would mean to say that. We'd be saying on the personality theory, I don't exist, but I'm alive. That seems like a very unpalatable combination of views. How can I be alive if I don't even exist? So it seems we have to say I'm not alive during Phase D. Not only don't I exist during Phase D, I'm not alive either. Yet, my body is alive; that's the whole stipulation. So it looks as though the personality theorist is going to have to introduce a distinction between my being alive, on the one hand, and my body being alive, on the other. In the normal case--up at the top, those two deaths occur simultaneously. My body stops being alive at the very same moment that I cease being alive. But in the abnormal case, the personality theorist needs to say, or so it seems to me, the two deaths come apart.
The death of my body occurs at star two. My death occurs at star one. Notice that the body theorist didn't need to draw that distinction. Because if I just am my body, then well, I'm just my body. My death occurs at the death of my body. But still, even the body theorist needs a different distinction. We already learned, by thinking about the corpse case, that existence wasn't good enough for the body theorist. He wanted to be alive. And when I think about Phase D, I want to say something more. It's not good enough that I'm alive. I want to be a person. So what matters to me isn't just being alive, but being back here during Phase B. So then it needs something like the same distinction. Not, my death versus my body's death, but perhaps the death of the person, if we could talk that way, versus the death of the body. My death, for the body view, occurs with the death of my body. But in terms of what matters, it's the death of the person and that's star one, not star two. Now, I want to take just a couple of minutes and mention some other puzzles, or at least questions, worth thinking about in terms of the physicalist picture. I'm only going to point to them, rather than explore them. But I've been focused on the question about the end of life. We might ask as well, what about the beginning? What should we say about Phase A, when the body is turned on and functioning, developing, but the brain has not yet gotten to the stage at which it's turned on, or perhaps it hasn't yet become, well, it's not doing person functioning. It's not reasoning. It's not communicating. It's not thinking. It's not aware. It's not conscious. There's going to be some Phase A like that. What should we say about that phase? Do I exist during that phase or don't I? Well, on the body view, I suppose we should say I do exist. Being a person is a phase. We happen to have, in Phase A, the stage of my existence before I become a person. 
Of course, if we take the version of the body view that what I am, essentially--the crucial body part--is my brain, then we really would have to subdivide A into two parts: early A and late A. In very, very early A, the brain hasn't even developed yet. It hasn't been constructed yet. If I just am my brain, in effect, then early A, I don't exist yet. It's not until late A, when the brain gets put together, that I start to exist. There is something there. It's my body, but it's not me, in early A. It seems sort of hard to believe, but maybe that's the right thing to say. In any event, the fans of the personality theory shouldn't be laughing too hard, because they're going to have to say something similar. Remember, if you accept the personality theory, follow the personality. Don't got the same personality? I cease to exist. That's why we said on the personality theory, as we went ahead in time, once the P-functioning stops, I don't exist anymore. That's what the personality theorist said. But we can raise that same point going backwards. When did I begin to exist on the personality theory? Not until my continuing, evolving through time personality started. And that certainly wasn't true way back at the start of A, as the fertilized egg first begins to split and multiply, subdivide and make organs. It's a good long time till any kind of mental processing occurs at all. So on the personality theory, I did not exist when that fertilized egg came into being, when the egg and the sperm joined. That's still not me, on the personality theory. Clearly, these issues are relevant for thinking about the morality of abortion. I'm not going to pursue them here, but you can see how they'd be relevant. If we want to worry about when, if ever, is an abortion justified, it might be worth getting clear on, when do creatures like us start? Interesting question, but having noted it, let me put it aside. Ah, question.
Student: [inaudible] Professor Shelly Kagan: The question was: Would it be plausible to say that at the early phases of A, strictly speaking, the body's not functioning, because it's so utterly dependent on help from the mother's body. It needs the mother for respiration, for nutrients, and so forth and so on. That's a great question. And it's the sort of question and the reason why I said I wanted to glance in this direction without really going there. That's a nice example of it. We might wonder, just when should we say the body functioning really does start? How much independence does it take? We could draw yet another picture of a different way a life could come to an end. Imagine a body towards the end of life, on life support machinery. Do we want to say the body's functioning or not functioning? Well, hard cases there. So similarly, there's going to be hard cases about the very, very early stages. And although they're great questions and I'm happy to discuss them with you further, I don't want to pursue them here and now. I want to point to a different question that--I think it's a crucially important question. My unwillingness to discuss them isn't a matter of my judgment that they're unimportant, just trying to keep at least roughly on track. Come back to the end of life. Think some more about Phase D and ask. All right, so this is something that's--If the personality function's been destroyed, can't be recovered, can't be fixed, but the rest of the bodily functioning is still going on. The heart's pumping, the lungs are breathing and so forth. The body's able to digest food. There we are in Phase D, in something like, perhaps, persistent vegetative state. Now, imagine that we've got somebody who needs a heart transplant or a kidney transplant, liver transplant. And tissue compatibility tests reveal this body's compatible, suitable donor. Can we take it or not? Well, you might have thought we answer that by asking "Am I still alive?" 
Well, rip out the heart, it's going to kill me, right? So if I'm still alive, you can't do that sort of thing. It's killing me. Well, if we take the personality theory, we have to say, my body's still alive, but I'm not still alive. That's what we seem to want to say. If I'm not still alive, all we'd be killing isn't me, but my body. So now we have to ask, who or what has the right to life? Do I have the right to life, or does my body have the right to life? Or we might say, look, certainly I have a right to life. But is it also true that in addition to me, my body has a right to life? Is there something immoral about removing the organs during Phase D when the person is dead and the only thing that's still alive is the body? Don't be too quick to assume the answer that's got to be yeah, it's still wrong. After all, on the body view, I still exist when I'm a corpse. But of course, there's nothing wrong about taking my heart, even though I still exist. After all, I'm a corpse. Why not then say, similarly, even though my body's still alive, nothing wrong about removing the heart if the person is dead. At least, the personality view opens the door to saying that. What about the body view? On the body view, of course, I just am my body. I'm still alive. Now is it wrong? Well... Just like, with the body view, we wanted to say, "Being alive is not all it's cracked up to be," the real question is not, am I alive, on the body view? An interesting question is, "Am I still a person?" And indeed, although I'm alive on the body view, I'm not still a person. Maybe it's not so much that I have a right not to be killed. Maybe I have a right not to be depersonified, to have my personality destroyed. If that's the real right, then again, there'd be nothing wrong with removing the heart in D. Well, again, clearly, very, very important and very, very complicated questions. But having gestured toward them, I want to put them aside. Instead, I want to raise the following question. 
So look, what I've just been talking about for the last half hour or so is the fact that we've got to get clear, in thinking about the nature of death, as to whether or not the crucial moment is the moment when the personality functioning stops or the moment when the bodily functioning stops. As we saw by thinking about the abnormal case, these things can come apart and we can have Phase D. But in the normal case, they happen at the same moment. And I've drawn a lot of different distinctions about what would you say if you're a personality theorist to deal with this? What would you say if you're a body theorist to deal with this? Having drawn all those distinctions, I'm going to just ride roughshod over them and put them aside. And let's just suppose that we're dealing with the normal case, where the body functioning stops at the same time as the personality functioning stops. So what is death? What's the moment of death? What is it to die, on the physicalist view? Well, at first glance, you might think the answer is, look, you exist, you're alive, whatever it is--as I said, I'm just going to be loose now, I'm going to put aside all the careful distinctions I just drew--I'm still around as long as my body is P-functioning. And when my body's not P-functioning, I'm not still around. Either I don't exist or I'm not alive or I'm not a person, whichever precise way we have to put it. That seems like the natural proposal for the physicalist to make. To be dead is to no longer be P-functioning. But that can't quite be right. Because imagine, don't just imagine, just remember what happened to you last night around 3:20 a.m. Let's just suppose that at 3:20 a.m. you were asleep and indeed, you weren't dreaming. You weren't thinking. You weren't reasoning. You weren't communicating. You weren't remembering. You weren't making plans. You weren't being creative. You were not engaged in P-functioning.
If we take this simple straightforward view and say you're dead when you're not P-functioning anymore, then you were dead, on and off and on and off, last night. Well, that clearly doesn't seem to be the right thing to say. So we're going to have to revise the P-functioning or the end of P-functioning theory of death. We're going to have to revise that theory. We're going to have to refine it to deal with the obvious fact that you're not dead all the times when you're unconscious and not dreaming. But refining in just the right way is going to turn out to be a surprisingly not straightforward matter, at least that's how it seems to me. At any rate, that's the question we'll turn to next time. |
YaleCourses_Philosophy_of_Death | 11_Personal_identity_Part_II_The_body_theory_and_the_personality_theory.txt | Professor Shelly Kagan: Last time, we turned to the question of what the metaphysical key to personal identity might be. What makes it be the case that one person, some person that exists in the future, is the same person as me. The first approach to this that we considered was the soul theory of personal identity: the key to being the same person is having the same soul. Same soul, same person. Different soul, different person. And the difficulty with that approach, even if we bracket the question whether or not there are souls, the difficulty with that approach was that it seems as though the soul could constantly be changing while the personality, as we might call it, stays the same. I have the same beliefs, memories, desires, goals, preferences and so forth. But the soul underneath it all keeps being swapped every five minutes. If the soul theory of personal identity were right, that would not be me. I would be--Every five minutes that person would die and we'd have a new person, despite having the same personality. Most of us find that a rather difficult thing to believe, that the person could be constantly changing in this way, without having any way at all to tell. And if we're not willing to accept that implication, it seems as though we need to reject "the soul theory of personal identity." Now, I use this cumbersome phrase because, of course, I'm not here talking about rejecting the existence of souls. What I'm considering right now is the question whether sameness of soul is the key to being the same person. And this is a--There's a logical distinction here that's worth drawing. Even if you believe in souls, you don't have to say that having the very same soul is the key to being the very same person. 
And trivially, of course, if you don't believe in souls, if you don't believe that souls exist, then you certainly can't appeal to the existence of souls, the continuity of soul, the sameness of soul, as the key to personal identity. But we might then ask, "Well what's the alternative?" Now, the natural alternative is to say, "The key to being the same person is not the sameness of the soul, whether or not it exists, but rather having the very same body." And again, although I'm not going to go on and on about this point, it's worth noticing that even if you do believe that souls exist, nothing stops you from accepting the body theory of personal identity. Nothing rules out the possibility that having the very same body is the key to being the very same person over time. Even if you believe in souls, you can accept the body theory. And it certainly looks as though if you don't believe in souls, you have to accept the body theory of personal identity. Now, as it turns out, that appearance is deceptive. There are still other alternatives open to the physicalist, but let's come to that other alternative later. Let's take a few minutes and consider the nature of the body theory, the body theory of personal identity. On this theory, of course, the secret to being the same person is having the same body. So when we ask--well, you remember last lecture I was talking about how there'd be somebody here lecturing to you about philosophy on Tuesday. Well, here somebody is. Is that the same person? Is the person who's lecturing to you now the same person as the person who was lecturing to you before? According to the body theory, the answer turns on the question, "Well, is this the same body as the lump of flesh and bone that was here last week?" If it is--and by the by it is--if it is, then it's the same person. So am I the person who was lecturing to you last week? Yes, I am, because it's the very same body. That's what the body theory says.
And unlike souls, where it's all rather mysterious how you could tell whether soul swapping was taking place or not, it's not all that mysterious how we check to see whether the same body's been around. Even though you didn't do it, you could have snuck into my house, watched my body go to sleep, get up in the morning, followed the body around over the course of the day, seen it go to sleep again. You could have tracked that body through space and time and said, "Hey look. It's the very same body." In the same way that we are able, in principle, to track cars, our earlier example, and talk about yeah, it's the same hunk of metal and wire and rubber and plastic. This is the same hunk, same body. All right, same body, same person. That's the body theory of personal identity. Now, if we accept the body theory, then of course if we turn to the question, "Could I survive my death? Could I survive the death of my body?" at first glance, it looks as though the answer's going to have to be, "Well, of course not." Because when my body dies, eventually the body begins to decay. It decomposes, turns into molecules which get absorbed into the soil or what have you. This may take years or decades or even centuries, but my body no longer exists after the death of my body. And so how could I survive the death of my body? For me to survive the death of my body, there's got to be somebody who's me, and if being me requires it being the same body, my body would have to still be around, but it's not. That's what it looks like at first glance. But at second glance we see that there's at least a logical possibility of surviving the death of my body. All it takes is for my body to be put back together. Bodily resurrection. Now I'm not going to here pursue the question of, "Do we believe bodily resurrection occurs or will occur?" I'll note that there have been religious traditions that have taught and believed in this possibility.
In particular, it's probably worth mentioning that early Christians believed in something like the body theory of personal identity and believed in bodily resurrection that would happen on Judgment Day. We can certainly understand the possibility that God would perform a miracle, put the molecules back together, turn the body back on. Same body, same person, come Judgment Day. That's the possibility. So it's at least worth emphasizing the fact that even if we don't believe in souls, we could still believe in the possibility of surviving one's death, the death of one's body, if we're willing to believe in bodily resurrection. Well, that's how it looks. Now let's take a harder look. Talking that way assumes that when you put the body back together, when God puts the body back together on Judgment Day, that that's still my body. Is that right? I'm inclined to think it is right. If God gathers up all the various molecules that had composed my body, reassembles them in the right order, putting this calcium molecule next to that hydrogen molecule and so forth and so on, reassembles them in the right way--obviously if what He makes out of my body's molecules is a Cadillac, then that's not my body--but if He puts them together in the right way, that seems like it should be my body. So here's an analogy to give you a sense of what's going on. Suppose I take my watch to the jeweler because it stopped working. And in order to clean it and fix it, repair it, what the jeweler does is he takes it apart. He takes the rust off of the gears, if there are still gears in watches. Imagine it's an old stop watch. And he cleans all the pieces and buffs them and polishes them and then reassembles the whole thing. And a week later, I come back and ask, "Where's my watch?" And he hands it to me. Well, all well and good. Now imagine some metaphysician saying, "Wait a minute, buster. Not so quick. That's not my watch. 
Admittedly, it's composed of all the very same pieces that made up my watch. Admittedly, all these pieces are in the very same order as my watch, but still that's not my watch." On the contrary, it seems to me the right thing to say about that example is, "No, that is my watch." My watch was disassembled for a period of time. Perhaps we should say my watch didn't exist during that period of time. But it got put back together. Now that's my watch. If that's the right thing to say about the watch--and it does seem to me to be the right thing to say about the watch--then God could presumably do the same thing on Judgment Day. He could take our molecules, which had been scattered, put them back together and say, "Ha! That's your body." And if the body theory of personal identity is right, well, that would be me. So it seems to me. But there's a different example that we have to worry about as well, which argues against this proposal that the body could decompose and then be recomposed. This is an example that's due to Peter van Inwagen. He's a contemporary metaphysician, teaches at Notre Dame. Suppose that my son builds a tower out of wooden blocks. We have a set of wooden blocks at home. Suppose that he builds some elaborate tower. It's very impressive. And he says, "Please show it to mom when she comes home." And he goes to bed. And I'm very good. I'm cleaning up the house after he goes to bed and oops, I knock over the tower. I say, "Oh my god, he's going to be so angry. I promised him I'd be careful." So what I do is I take the blocks and I put them back together, building a tower in the very same shape and the very same structure, the very same order as the tower that my son had built. And in fact I'm so careful--perhaps the blocks are numbered--I'm so careful that every block is in exactly the same position as in the case where my son built it. All right, I rebuild or I build this tower and my wife comes home and I say, "Look what our son built. 
This is the tower that our son built." Ah, that doesn't sound right. That's not the tower that our son built. That's a tower that I built. This is a duplicate tower. Sure, if my son were to wake up and I didn't tell him, he wouldn't know that it was a duplicate. But when you take a wooden block tower apart and then put the pieces back together, piece for piece, duplicate, you don't have the very same tower that you started out with. That's what van Inwagen says and, I've got to admit, sounds right to me. If I were to point to that tower and say, "Ari built that," I'd be saying something false. "That's the very same tower that Ari built." No, I'd be saying something false. So van Inwagen concludes, if you have an object and you take it apart and then put it all back together again, you don't have the very same object that you started out with. So even if Judgment Day were to come, and God were to reassemble the molecules and resurrect the body, it's not the very same body that you started out with. And if having the very same body is the key to personal identity, it's not the same person. Come Judgment Day, we've got a duplicate of me, but we don't have me. That's what van Inwagen would say, if that's the way bodily resurrection would work. I don't know, theology aside, I don't know what to say about the metaphysical questions. When I think about the tower case, I do find myself inclined to say, with van Inwagen, that's not the tower my son built. But when I think about the watch case, I find myself saying that is the very same watch. Now, all I can do is invite you to think about these two cases and ask yourself, what should we say here? Of course, for those people who think it really is the same tower, no problem. Then we say, the watch and the tower, in both cases, it's the very same object when it's reassembled. Reassemble the body, that'll be the very same body as well. 
For those people who say, "Yeah, van Inwagen was right about the tower, and the same thing would be true about the watch. The reassembled watch isn't the very same watch," then we have to say bodily resurrection would not be the very same body. So that wouldn't be me waking up on Judgment Day. The alternative is to try to find some relevant difference between the watch case and the tower case. Something that allows us to say that "well, when you reassemble the watch it is the same watch. When you reassemble the tower, it's not the same tower. Here's the explanation of why those two things work differently in the reassembly cases." And then of course, we'd have to further investigate whether when you reassemble a body, is it more like the watch case or is it more like the tower case? I just have to confess, I don't know what the best thing to say about these cases is. I find myself inclined to think reassembled watch, same watch. Reassembled tower, not same tower. Maybe there's a difference there. I don't have a good theory as to what the difference is. Since I don't have a good theory as to what the difference is, I'm not in a good position to decide whether a reassembled body would be the same body or a different body. I don't know. So there's metaphysical work to be done here by anybody who's at least interested in getting this theory of identity worked out properly. Still, at least the possibility that we could work this out is still there. So I suppose there's still at least the possibility that bodily resurrection would be coherent in such a way that it would still be the same body. So if we accept the body theory, could there be life after death? Could there be survival of the death of my body? Seems like, as far as I can tell, it's still a possibility, although there's some puzzles here that I don't know how to see my way through. 
Mind you, that's not to say that I myself do believe that there will be a Judgment Day, and on that day God will reassemble the bodies. But it at least seems like a coherent possibility. Let's refine the body view. I've been suggesting that the key here, the idea of whether it's the same person or not, is whether it's the same body. But of course as we know in thinking about familiar objects, we don't need to have every single piece of an object, of an entity, stay the same to have the same thing. So I think I previously talked about the steering wheel in my car. Every time I drive the steering wheel in my car, I rub off some atoms. But that's okay. It's still the very same physical object. The steering wheel is--Having the same steering wheel is compatible with changing of a few pieces. The same thing is true for bodies, right? You get sunburned, your skin peels, you've lost some atoms in your body. It doesn't really matter. It's still the very same body. So if body is the key to personal identity, we don't have to worry about the fact that we're constantly gaining and losing atoms. Yes, question? Student: What about someone who loses a huge amount of weight? Professor Shelly Kagan: Good. The question was, "What about somebody who loses a huge amount of weight?" They feel different. People treat them different. What about that case? Well, I think if we're doing metaphysics, as opposed to psychology--Psychologically, we understand why losing weight might make a real difference as to how you feel about yourself. And we might even say, loosely, it's as though she's a whole new person. But strictly speaking, we don't think it is literally a whole new person. It's not as though we say, "Poor Linda died when she entered the spa. Or a week into the spa when she dropped those 50 pounds. Somebody else who remembers all of Linda's childhood, some imitator came along." We don't say "different person." We say "same person, lost a lot of weight." 
Now that's not a problem for the body view, because on the body view, the question is, is it the same body? And what we want to say is, of course, look, it's still your body even if you break your arm. It's still your body after you've eaten dinner, and so now some molecules have been absorbed into your body that weren't there before. It's still your body after you lose some molecules, even a lot of molecules. There can be changes in your body that are compatible with it still being the same body. Now, we might worry about which changes, since it's certainly not as though any change will do. I mean, suppose what happens is Linda goes to bed and what we do in the middle of the night is we take away that body and put some new body there. Well, that 100% change, that's clearly too much. Change of some small percentage, from eating, not a problem. Change of a somewhat larger percentage, from losing a fair bit of weight, doesn't seem to be a problem. So which changes in bodies make for a different body and which changes make for the same body? And in particular, how should we run that if we're thinking about the body as the key to personal identity? I think if we have that question in front of our minds, we're going to want to say not all parts of the body are equally important. You lose a fair bit of weight, some fat from your gut, not a problem. Here's one of my favorite examples. In the Star Wars movies, Darth Vader whips out his light saber and slashes off the hand of Luke Skywalker. "Luke, I am your father." "No!" Then the hand goes, right? The very next scene--this has always amazed me--the very next scene, Luke's got an artificial hand that's been attached to his body and they never even mention it again. No one says, "Oh, poor Luke. He died when Darth Vader cut off the hand." It seems pretty clear that not all parts of the body matter. You can lose a hand and still survive. Same body, except now without a hand.
Suppose Darth Vader had aimed a little higher and cut off Luke's entire arm. It would still be Luke. It would still be Luke's body. Suppose, even worse, Darth Vader slices off both arms and both legs. It would still be Luke. It would still be Luke's body, though now without arms and legs. What part of the body, if any, is essential? Well, here's a proposal. It seems to me we'd say something rather different if what happened was that what got destroyed was Luke's brain. Suppose that Darth Vader uses the force--the dark side of the force, of course--to destroy Luke Skywalker's brain, to turn it into pea soup. Now I think we might want to say, "Well look, no more Luke." And if what happens is they drag out some replacement brain, it's still not Luke. At least, that's a possible version of the body view. According to this version, which I take to be the most promising, the best version of the body view, the crucial question in thinking about personal identity is whether it's the same body--but not all parts of the body matter equally. The most important part of the body is the brain. Well, why the brain? No surprise there, because of course the brain, we now know, is the part of the body that is the house of your personality, your beliefs, your desires, your fears, your ambitions, your goals, your memories. That's all housed in the brain. And so that's why the brain is the key part of the body for the purposes of personal identity. That's what I'm inclined to think is the best version of the body view. We find examples of this thought, that the brain is the key, in odd places. So let me actually share one with you. This was something from the Internet that my brother sent to me some years ago. It purports to be from a transcript from an actual trial in which a lawyer's cross-examining the doctor. And you'll see. I don't actually know whether it's true or not, whether somebody just made it up.
But it purports to be true.

Q: Doctor, before you performed the autopsy, did you check for a pulse?
A: No.
Q: Did you check for blood pressure?
A: No.
Q: Did you check for breathing?
A: No.
Q: So then it is possible that the patient was alive when you began the autopsy?
A: No.
Q: How can you be so sure, doctor?
A: Because his brain was sitting on my desk in a jar.
Q: But could the patient have still been alive nevertheless?
A: It is possible that he could have been alive and practicing law somewhere.

The reason that this is funny, other than of course the obvious moral, which is that lawyers are morons, is this: why is it so clear the lawyer's got to be a moron? Because of course we think, look, lose a hand, the guy could still be alive. Lose an arm, lose a leg. Lose the brain, he's not alive. So again, this is hardly philosophical proof, but it shows that we're drawn to the thought that the key part of the body is the brain. Now, think about the implications of holding that view. Suppose we adopt that version of the body view. If I get a liver transplant, so here I am and we take out my liver and we put Jones' liver inside. I've gotten a liver transplant. It's still me. Suppose we rip out my heart and put Jones' heart in here. I've gotten a heart transplant. It's still me. Suppose we rip out my lungs and put in Jones' lungs. I've gotten a lung transplant. It's still me. Suppose we rip out my brain, put in Jones' brain. Have I gotten a brain transplant? No. What's happened is that Jones has gotten a body transplant. Or, as we might put it, a torso transplant. If we accept this version of the body theory, we say the crucial part of the body for personal identity is not sameness of torso. The crucial part of the body is sameness of brain. Just like "follow the soul" was the answer if we believe in the soul theory of personal identity, if we believe in the brain version of the body theory of personal identity, same person or not?
Follow the brain. Same brain, same person. Different brain, different person. As I've now been saying several times, I think that's the best version of the body view, although not all body theorists believe that. As you know from reading your Perry, the assigned reading, his Dialogue on Personal Identity and Immortality, the heroine of that story, Gertrude--Gertrude actually thinks the key part of the body is the torso. Follow the torso, follow the person. That's what she thinks. I'm inclined to say, no. In those moods, when I accept the body theory, I'm inclined to think, no, follow the brain. Gertrude would presumably say you get a brain transplant, you got a brain transplant, because it's the same torso. I want to say, as a fan of the brain theory, you get a brain transplant, what's really happened is somebody else has gotten a torso transplant. Follow the brain. How much of the brain? Do we need all of the brain? Well, just like we didn't have to follow the parts of the body that aren't essential for housing the personality, we might ask ourselves, "Do we need all of the brain to house the personality?" Research suggests that there's a fair bit of redundancy in the brain. You can lose portions of the brain and still have a perfectly functioning, P-functioning person. Some of you may know that there have been experiments in which, for one reason or the other, the two halves of the brain have been separated. And you often end up there with, well, something closer to two persons being housed within one skull, because they can often still communicate in various ways. We don't quite get that. I gather that the best research suggests we don't really have complete redundancy with hemispheres. But suppose that we did. Let's be science-fictiony. Suppose that, as a kind of backup security, what evolution has done is produced so much redundancy in the brain that either half of the brain would suffice. All right, so think about our brain transplant example. 
So there's an accident with Jones and Smith. Jones' torso gets destroyed. His brain is fine. Smith's brain has gotten destroyed. His torso is fine. We take Jones' brain; we put it in Smith's torso. We hook up all the wires, as it were. The thing wakes up. Who is that? Jones' brain, Smith's torso. Follow the brain. That's Jones that woke up. Version two. Horrible accident. Jones' torso has been destroyed and the left half of his brain has been destroyed. But the right half of his brain is still there. Smith's torso is fine, but his entire brain has been destroyed. We take the right half of Jones' brain, put it into Smith's torso, hook up all the wires the right way, the thing wakes up. Who is it? It's Jones. Follow the brain, and more particularly, follow however much of the brain it takes to have enough of the brain there to still give you the memories, beliefs, desires, and so forth and so on. If it were true--it probably isn't true, but if it were true--that half of the brain was enough, then half the brain would be enough. That would be Jones that woke up. Question? Student: [inaudible] Professor Shelly Kagan: Great. The question was, "On this theory, what do we say about the case where we take the two halves of Jones' brain, split them, put them in two different torsos. They both wake up. Would they both be Jones?" That's a wonderful question. It's a wonderful case to think about and, indeed, I am going to come back to it. But I just want to bracket it for the time being. But it's a great question to keep in mind as you think about the plausibility of the body theory. All right, so I'm inclined to think that the best version of the body theory has to do with following the brain. So one thing that a physicalist, who does not believe in souls, one thing that a physicalist could say is, "What's the key to personal identity? The body. Sameness of body." And then I'm inclined to think the best version of the body view is the brain view. 
So that's something that a physicalist can say. And for that matter, it's something that somebody who believes in souls could say as well: even though there are souls, that may not be the key to personal identity. Maybe sameness of body is the key to personal identity. That's something a physicalist or a dualist can say. But, to make good on a promissory note I offered earlier, it's not the only view available to physicalists or, for that matter, dualists. Even if there are no souls, we don't have to say that the key to personal identity is the sameness of the body. We could instead say the key to personal identity is the sameness of the personality. After all, go back to the Lockean worries about the soul theory of personal identity. It seemed very hard to believe that it isn't the same person when the memories and beliefs and desires and goals and ambitions and fears are all the same, even if the soul is constantly changing. It seems as though we wanted to say same person. Why? Roughly speaking, because it's the same personality. And with the body view, when I started arguing a few moments ago that the best version of the body view was the brain view, why did that seem plausible? Why didn't we say that Luke died when he lost his hand? Because the brain, after all, was the part of the body that houses the personality. Enough of the brain was good enough, I said. What counts as good enough? Enough to keep the personality. Well, if what we think is really important here is the personality, why don't we just say the key to personal identity is the personality? Let's just say it's me, provided that there's somebody who's got the same set of beliefs, desires, goals, memories, ambitions, fears. To coin a word, the same "personality." So the secret to personal identity on this new proposal isn't sameness of body, it's sameness of personality. Now, it's important to bear in mind that this view is perfectly compatible with being a physicalist.
After all, we're not saying that in order to have personalities you need to have something nonphysical. As physicalists, we can still say that the basis of personality is that there are bodies that are functioning in certain ways. But for all that, the key to the same person could have to do with the personality rather than the sameness of bodies. Of course, normally the way you get the same personality is by having the same body. Still, if we ask, "What's doing the metaphysical work here? What's the key to being the same person?" we can say sameness of body gave us the same personality, but it was sameness of personality that made it be the very same person. Could there be some way to get sameness of personality while not having sameness of body? Maybe. Suppose that we had some disease. The doctor tells me the horrible news that I have some disease that's going to eventually turn my brain into pea soup. But luckily, just before it does, they can take all of my personality and put it into an artificial replacement brain. So--just like you can have artificial hearts and artificial livers--you can have artificial brains, which will get imprinted with the same personality. Same memories, same beliefs, same desires, same fears, same goals. We obviously can't do that. This is a science fiction story. But at least it allows you to see how the body and the personality could come apart. And so we could have the same personality without literally the same brain. If personality is the key to personal identity, that would still be me. Hold off again for a few minutes, at least, on the question, "So what should we believe here, the body view or the personality view?" Let's try to refine the personality theory. So again, the point I was just emphasizing was that even if we accept the personality theory, this doesn't threaten our being physicalists.
We can still say the reason that we've got the same personality in the normal case, is there's some physical explanation of what houses the personality. But for all that, the key to personal identity is same personality. Notice, by the way, that somebody who believes in souls could also accept the personality theory of personal identity. Locke believed in souls. He just didn't think they were the key to personal identity. So you might think, "Oh no. The physicalist is wrong when the physicalist says that personality--memory, belief, consciousness, what have you--is housed or based in the body. It's based in an immaterial soul." Dualists could say that. And yet, for all that, the dualist could consistently say, "Still, same soul is not the key to personal identity. Same personality is the key to personal identity. If God replaces my soul every 10 minutes, as long as He does it in such a way as to imprint the very same personality on the soul, it doesn't matter any more than it didn't matter whether or not some of my body parts were changing." So the personality theory of personal identity can be accepted by physicalists and it can be accepted by dualists. So, just to keep score, right now we've got three basic theories of personal identity on the table. The soul theory, the key to personal identity is the same soul. The body theory, the key to personal identity is the same body. Where the best version, I think, is the brain version of the body theory. And the personality theory, the key to personal identity is having the very same personality. Well again, we've got to be careful about refining this. Just like we all agreed, I suppose, that you can have the very same body, even though some of the parts come and go, atoms get added, other atoms get knocked off. We can say, we'd better say, that you can have the very same personality even if some of the elements in your personality change. 
After all, we defined the personality in terms of it being a set of beliefs and memories and desires and goals and fears and so forth. But those things are constantly changing. I have all sorts of memories now that I didn't have when I was 10. I have memories of getting married, for example. I wasn't married when I was 10. So does the personality theorist have to say, "Uh-oh, different personality. That kid no longer exists. That person died, got married and the memories died." If we say that, we have very, very short lives. Because after all, right now I've got some memories that I didn't have two hours ago. I have some memories I didn't have 20 minutes ago. If every time you got a new memory you had a new personality and the personality theory said having the very same personality was the key to survival, then none of us survive more than a few seconds. Well, the answer presumably is going to be that the best version of the personality theory doesn't require item for item having the very same beliefs, memories, desires, and so forth. But instead requires enough gradual overlap. Your personality can change and evolve over time. So here I am as a 10 year old child. I've got certain desires, certain memories. As the year goes by, I get some new memories. I lose some of my goals. I no longer--When I was 10, when I grew up I wanted to be a trash collector. That was my first chosen profession. At some point I gave up that desire. I didn't want to be a trash collector anymore. I wanted to be, I kid you not, I wanted to be a logician when I was a teenager. I wanted to study symbolic logic. So at a certain point I gave that up. So my memories, my desires were changing, but they all changed gradually. I lost some old memories. I don't remember everything I knew or remembered when I was 10. When I was 10, I had pretty vivid memories of kindergarten. Now I have very sketchy memories of kindergarten. Still, it wasn't abrupt. It was gradual. 
There was this slow evolution of the personality. And so when the personality theorist says the key to personal identity is the same personality, they don't mean literally the very same set of beliefs and desires. They mean, rather, the same slowly evolving personality. Here's an analogy. Suppose I had a rope that stretched from that end of the room all the way across to this end of the room. Very same rope at that end as this end. What makes up a rope? Well, as you know, ropes are basically bundles of fibers, very thin fibers that have been woven together in a certain way. But the interesting thing is the fibers themselves aren't actually all that long. They might be a couple of inches or at most a foot or so. And so no single fiber stretches all the way across the room. Or even if some fibers did, most of the fibers don't. Does that force us to say, "Ah, so it's not the very same rope at the end as at the beginning"? No. We don't have to say that at all. What we want to say is, "It's the same rope as long as there's this pattern of overlapping fibers." Certain fibers end, but most of the fibers are continuing. Some new fibers get introduced. They continue for a while. Eventually maybe those fibers end, but some new fibers have been introduced in the meantime. As long as it's not abrupt. Imagine I take my scissors and cut out a foot in the middle. Then we'd say there isn't the right kind of pattern of overlap and continuity. Now we really do have two ropes--one rope here, one rope there. But if, in contrast, there is the right kind of pattern of overlap and continuity, same rope, even if no single fiber makes it all the way across. Something analogous needs to be said by the personality theorist. Even if I have few or no memories identical to the ones that I had when I was 10, that's okay. We can still say it's the same personality, the same evolving personality, so long as there's a pattern of overlap and continuity.
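The overlap-and-continuity pattern in the rope analogy can be put in a few lines of Python (my own illustration, not anything from the lecture): treat each fiber as an interval along the rope, and count the rope as one continuous object just in case the fibers chain together with no gap, even though no single fiber spans the whole length.

```python
def is_one_rope(fibers, length):
    """One continuous rope if overlapping fibers cover the whole span
    with no gap, even though no single fiber runs end to end."""
    covered = 0.0
    for start, end in sorted(fibers):
        if start > covered:      # a gap: the rope has been cut
            return False
        covered = max(covered, end)
    return covered >= length

# Short fibers, each overlapping the next: one rope across 10 feet.
whole = [(0, 1.5), (1, 2.5), (2, 3.5), (3, 4.5), (4, 5.5),
         (5, 6.5), (6, 7.5), (7, 8.5), (8, 9.5), (9, 10)]
# Cut a foot out of the middle: remove the fibers starting in [4, 5).
cut = [f for f in whole if not (4 <= f[0] < 5)]
print(is_one_rope(whole, 10))  # True
print(is_one_rope(cut, 10))    # False
```

Cut a foot out of the middle and the check fails: the pattern of overlap is broken and we have two ropes rather than one, which is just what the analogy says about an abruptly discontinuous personality.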
New memories get added, some memories get lost. New goals get added, some goals get lost. New beliefs get added, some beliefs get lost. There might be few beliefs, desires, goals that made it all the way through. But as long as there's the right kind of overlap and continuity, same personality. All right, so what have we got? Three views--soul view, body view, personality view. Three rival theories about the key to personal identity. Now, which of these is right? Well, since I don't myself believe in souls, it's hardly going to surprise you to learn that I don't think the soul theory of personal identity is right. For me, the choice boils down to the choice between the body theory of personal identity and the personality theory of personal identity. Of course, in real life, they go hand in hand. In ordinary cases at least, same body, same personality. Both theories are going to say it's the very same person. And if you believe in souls, you are likely to think, same soul as well. In ordinary cases, you have the same soul, same body, same personality, same person. To think about which one of these is the key to personal identity, we need to think about cases, maybe somewhat fantastical, science-fictiony, in which they come apart. Cases in which bodies and personalities go their own ways, as it were. So that's what I'm going to do. I'm going to tell you a story in which your body ends up one place and your personality ends up someplace else. And I'm going to invite you to think about which of these two resulting end products is you. If you could figure out which one's you, that would tell you whether you think the body theory is the right theory or the personality theory is the right theory. Now, what's going to be our guide? I'm going to, rather gruesomely--not in real life, a science fiction story--I'm going to torture one of the two end products. I'm going to ask you, "Which one do you want to be tortured?"
Or to put the point more properly, which one do you want to not be tortured? Because I'm going to assume, I'm going to take it, that it's important to you that you not be tortured. So by seeing who you want to keep safe, this will help you see which one you think is you. Of course, I've got to be sure that you're thinking about this in the right way. Some of you are probably good, moral individuals and you don't want anybody to be tortured. I say, "Ah, I'm about to torture Linda over there." You say, "No, no. Don't torture Linda." Still, if I were to say to you, "I'm about to torture you," you'd say, "No, no! Don't torture me!" and there'd be some extra little something when you said that, right? So I want to invite you to keep that extra little something in mind when we tell the stories next time--we won't get to them until then--and I say, "Okay, who do you want to be tortured, this person or that person?" The question is, from that special egoistic perspective that we're all familiar with, which is the one you really care about? That's going to be our guide to deciding what's the key to personal identity. But to hear the stories, you've got to come back next lecture.
Lecture 8: Plato, Part III: Arguments for the Immortality of the Soul (cont.)

Professor Shelly Kagan: We've been looking at Plato's arguments for the immortality of the soul, and so far I have to say I haven't found them very compelling arguments. In a minute, I'm going to turn to an argument that at least strikes me as more interesting. It's more difficult to pin down where it goes wrong. But before we do, I want to make a last couple of comments about the argument we were considering at the end of last class. That was the argument from recollection. You recall the basic idea was that although objects in the ordinary familiar empirical world are not perfectly just, perfectly round, what have you, they're able to remind us of perfect justice, perfect roundness and the like. And when Plato asked himself, "How could that be?" the answer he gives is, "Well, it's got to be that we were previously acquainted with the forms before our life in this world." And that shows that the soul must be something that existed prior to the creation of the body. That's the argument from recollection. And at the very end of class I suggested that, look, even if we were to grant to Plato that in order to think about justice, circularity, what have you, we had to somehow grasp the forms, and even if we were to grant to Plato that nothing in this world is perfectly round or perfectly just, it's not necessarily correct to say, "So the only possible explanation of what's going on is that these things in the empirical world remind us of our prior acquaintance with the forms." It could be that what goes on is, when we bump up against something that's partially just or partially beautiful or partially round--imperfectly round--what happens is, those things sort of trigger our minds in such a way that we begin to think about the forms for the very first time. So it might be, in order to think about justice and roundness, we have to grasp the forms.
But it could be that we only grasp the forms in this life, for the very first time. Exposure to the things that participate in the forms may nudge our minds or our souls in such a way that at that point--given that exposure--we begin to grasp the forms. It's as though the ordinary earthly objects, we bump into them or they bump into us, and they get us to look upwards to the heavenly Platonic realm. I don't mean literally upwards. It's not as though these things--the number three--are up there. But if you accept the metaphor, running into things in the empirical world gets our minds to start thinking about, for the first time, the heavenly realm of the Platonic forms and ideas. That would be just as likely a possibility as the alternative explanation that what's going on is that ordinary empirical objects are reminding us of our prior acquaintance. Perhaps these ordinary objects act like letters of introduction, getting us to, helping us to, think about the forms for the very first time. Well, if that's right, then of course, we don't have any good reason to follow Plato when he says, "It must be the case that the soul existed prior to birth." Now, the objection I've just raised is not an objection that Plato raises in the Phaedo, but he does raise a different objection. Remember our concern isn't, strictly speaking, with the question, "Did the soul exist before our birth? Did the soul exist before our bodies?" but rather, "Is the soul immortal?" And so, having now given the argument from recollection, Plato envisions two of Socrates' disciples, Simmias and Cebes, responding, objecting, by saying, "Look, even if the soul existed before birth, it doesn't follow that it exists after death. And that's, after all, what we are really wondering about. We want to know, will we survive our deaths? Is the soul immortal? And you haven't yet shown that, Socrates," they object. Could be that it existed before, but won't exist afterwards.
But very nicely--it's quite an elegant structure at this point--Socrates puts together the two arguments that we've just been rehearsing--the argument from recollection and the argument that came before that, the one that I dubbed "the argument from recycling." Remember, the argument from recycling says, when you build something, you build it out of parts, and when that thing falls apart you go back to the parts. All right. So the prior parts get recycled. The soul, we now say--based on the argument from recollection--the soul is one of our prior parts. The soul existed before we were put together, or before we were put together with our bodies. If you then combine the argument from recycling and say, the parts that existed before are going to exist afterwards, it must follow that if the soul existed before, it will exist afterwards as well. And so we've got the immortality of the soul after all. Now, bracket the fact that, as I just explained, I don't myself find the argument from recollection persuasive. I don't think we've got any good reason to believe--based on the sort of things that Plato is drawing our attention to--I don't think we've got any good reason to believe that the soul existed before we were born. But even if we grant him that, we shouldn't be so quick to conclude, on the basis of combining the argument from recollection and the argument from recycling, that the soul will continue to exist after the death of our bodies. After all, take a more familiar, humdrum example. Cars are built out of non-cars, right? Cars get built out of engines and tires and steering wheels. And the engine is not a car; the steering wheel is not a car. So you build the car out of its parts. Now, the engine is a prior-existing part.
So can we conclude, then--from the argument from recycling (parts get reused, get rebuilt; when cars get destroyed, the parts are still around) and the fact that the engine is a prior-existing part from which the car was built--that the engine will continue to exist forever after the destruction of the car? No, obviously you can't conclude that at all. Sometimes when cars get destroyed the engine gets destroyed right along with it. And of course, even if--in many cases--the engine continues to exist for a while after the destruction of the car, it certainly doesn't follow that the engine is immortal, that it continues to exist forever. Engines will eventually decompose and turn back into atoms. So from the mere fact that the engine was a part that existed before the car existed, and the further fact that when the car breaks down, it decomposes back into parts, it certainly doesn't follow that all of the parts that existed prior to the existence of the car will be around forever. That would just be false. So even if we were to give Socrates the assumption that--the thesis that--the soul existed before we were put together, before we were born, it still wouldn't follow that the soul will continue to exist after we're taken back apart. The soul might eventually decay just like the engine will eventually decay. What we need, to really become convinced of the immortality of the soul, is not the mere claim--even if we were convinced of it--that the soul was around before our birth. We need to believe that the soul, unlike an engine, can't itself be destroyed, can't itself decompose, can't fall apart. That's what we need if we're really going to become convinced of the immortality of the soul.
Now, as I remarked previously, one of the amazing things--not amazing but one of the really attractive things about Plato's dialogues is, you raise an objection and it often seems as though Plato himself, whether or not he explicitly states the objection, seems aware of the objection, because he'll go on to say something that is responsive to it. And again, that makes sense if you think of these dialogues as a kind of pedagogical tool to help you get better at philosophizing. So the very next argument that Plato turns to can be viewed, I think, as responding to this unstated objection--well, I stated it, but Plato doesn't state it in the dialogue--the worry that even if the soul was one of the parts, even if the soul was already around before we were born, how do we know it can't come apart? How do we know the soul can't be destroyed? Since what we want to know is whether the soul is immortal, how do we know it can't break? Plato's next argument then tries to deal directly with this worry, and it's a quite interesting argument. I'll give it another--a new label--I'll call it the "argument from simplicity." Socrates turns to a discussion of what kinds of things can break and what kinds of things can't break; what kinds of things can be destroyed, and what kinds of things can't be destroyed. He thinks about examples; he surveys examples and tries to extract a kind of metaphysical principle from this. And then, as we'll see, he's going to use this principle to convince us--or to try to convince us--that the soul is immortal, it's indestructible. Well, lots of things can be destroyed. Here's a piece of paper. It can be destroyed. Right? Why was it that this was the sort of thing that could be destroyed? Well, the straightforward answer is the piece of paper had parts. And in breaking it, in ripping it, what I literally did was I ripped one part from another. To destroy the piece of paper, I take its parts apart. Here's a piece of chalk. The piece of chalk can be broken.
What am I doing? Taking its parts apart. The kinds of things that can be destroyed have parts. They are composite. They are composed of their parts. Bodies can be destroyed because you can take a sword to one and go sweep, sweep, sweep and chop it into pieces. Composite things can be destroyed. Things that have parts can be destroyed. Now, what kind of things can't be destroyed? Well, it won't surprise you that when Plato looks for an example of something that's eternal and indestructible, his mind immediately starts thinking about the Platonic forms. Take the number three. The number three can't be destroyed, right? Even if a nuclear explosion took place and everything on Earth got atomized and destroyed through some bizarre science fiction chain reaction, like they're always doing in movies, the number three wouldn't be touched. The number three wouldn't be fazed. It would still be true that three plus one equals four. You can't hurt the number three. You can't alter or destroy perfect circularity. Why not? Well, it doesn't have any parts. That's the thought. Things like the Platonic forms are eternal, and they're eternal, changeless, and indestructible, because they are simple--simple here being the metaphysical notion that they're not composed of anything. Anything that's built up out of parts you could, at least in principle, worry about the parts coming apart and, hence, the thing being destroyed. But anything that's simple can't be destroyed in that way. It has no parts to take apart. So the kinds of things that can be destroyed are the things with parts, and those are the sorts of things that change, right? Even if they're not destroyed, what's a tip-off to something being composite? The fact that it changes. Suppose I take a bar of metal and I bend it. I haven't destroyed it, but I've changed it. I'm able to change it by rearranging the relationships between the various parts.
My body is constantly changing because the relationships between my arms and my head and so forth are changing; my muscles are moving. You rearrange the parts, the thing changes. Oh, but that means it's got parts and could be destroyed. So we've got some nice generalizations. Things that change have parts; things with parts can be destroyed. What are the kinds of things that you can change and destroy? Those are the familiar empirical objects that we can see: pieces of paper, bodies, pieces of chalk, bars of metal. In contrast, on the whole other side, you've got things that are invisible, like the number three--nobody sees the number three--things that are invisible, that never change. The number three never changes, right? The number three is an odd number. It's not as though, oh, today it's odd but maybe tomorrow it'll be even. It's eternally an odd number. Three plus one equals four today, yesterday and forever. These facts about the number three will never change. The number three is changeless. So the forms are eternal; they're invisible; they are changeless. They're simple, and simple things can't be destroyed; forms can't be destroyed. You put all this together; these are the sorts of thoughts that Socrates assembles, and I've got the initial thoughts up there on the board. All right. So premise number one, only composite things can be destroyed. Premise number two, only changing things are composite. So if you put one and two together, you'd get: only changing things could be destroyed. And now add three, invisible things don't change. Well, if you've got to be the kind of thing that can change in order to be composite, and you've got to be composite in order to be destroyed, and invisible things don't change, then it follows, four: invisible things can't be destroyed. That's the metaphysical thesis that Socrates comes to by thinking about cases.
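The derivation on the board is a simple chain of universal implications, and it can be checked mechanically. Here is a minimal formalization in Lean 4; the predicate names are my own labels for the lecture's premises, nothing from Plato's text.

```lean
-- Premises of the argument from simplicity, as reconstructed in the lecture:
--   p1: only composite things can be destroyed
--   p2: only changing things are composite
--   p3: invisible things don't change
-- Sub-conclusion (premise four): invisible things can't be destroyed.
theorem invisible_indestructible
    {Thing : Type} (Destructible Composite Changing Invisible : Thing → Prop)
    (p1 : ∀ x, Destructible x → Composite x)
    (p2 : ∀ x, Composite x → Changing x)
    (p3 : ∀ x, Invisible x → ¬ Changing x) :
    ∀ x, Invisible x → ¬ Destructible x :=
  fun x hinv hdes => p3 x hinv (p2 x (p1 x hdes))

-- Adding "the soul is invisible" then yields "the soul can't be destroyed"
-- by instantiating the universal claim at the soul.
example {Thing : Type} (Destructible Composite Changing Invisible : Thing → Prop)
    (soul : Thing)
    (p1 : ∀ x, Destructible x → Composite x)
    (p2 : ∀ x, Composite x → Changing x)
    (p3 : ∀ x, Invisible x → ¬ Changing x)
    (p5 : Invisible soul) : ¬ Destructible soul :=
  fun hdes => p3 soul p5 (p2 soul (p1 soul hdes))
```

Notice that the premises, if granted, deliver the conclusion outright; nothing in the chain of implications yields only an approximate or qualified result.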
And that's the crucial premise or sub-conclusion for the immortality of the soul, because then Socrates invites us to think about the soul. Is the soul visible or invisible? He says, pretty obviously, "It's invisible." But if invisible things can't be destroyed, the soul can't be destroyed. So one, two and three got us four, invisible things can't be destroyed, but five, the soul is invisible, so six, the soul can't be destroyed. That's my best attempt at reconstructing the argument from simplicity. It's not as though Plato himself spells it out with premises and conclusions like that, but I think this is fairly faithful to the kind of argument he means to put forward. And in a moment I'll turn to evaluating whether that's a good argument or not. But I think it's a pretty interesting argument; it's an argument worth taking fairly seriously. Except, I've got to confess to you that Socrates doesn't quite conclude the way I would've thought he would've concluded. So I've had the argument conclude six, the soul can't be destroyed. But what Socrates actually says is--his actual conclusion is--"And so the soul is indestructible or nearly so." That's rather an odd qualification, "or nearly so." The conclusion that Socrates reaches from his examination of change and invisibility and so forth and so on, and compositeness versus simplicity, is that "the soul is indestructible or nearly so." Now, adding that qualification opens the door to a worry. The worry gets raised by Cebes, who says, even if we grant that the soul is nearly indestructible, that's not good enough to get us immortality. And he gives a very nice analogy involving a coat, which could outlast its owner but isn't immortal. Or the owner could go through several coats; but still at some point the owner's going to die as well. The owner is, in that sense, far closer to immortality. And I've gone through many coats in my life, but for all that, I'm not indestructible.
If all we've got is the mere fact that the soul is "nearly" indestructible, it takes a whole lot more work to destroy it, maybe it lasts a whole lot longer; maybe it goes through a whole lot of bodies being reincarnated a half-dozen, or a dozen, or hundred times before it wears out and gets destroyed. That's not enough to give us the immortality of the soul. That's the objection that Cebes raises. And one of the oddities is that, as far as I can see, Socrates never responds to that objection. He raises the objection--that is, Plato raises the objection in the voice of Cebes--but Socrates, on Plato's behalf, never answers the objection. It's hard to say what exactly is going on. It might be that Plato's worried that he hasn't really shown that the soul is immortal after all. Maybe this argument from simplicity isn't really as good as it needs to be. And maybe that explains why Plato then goes on to offer yet another argument. After all, if this argument really did show the immortality of the soul, why would he need to offer a further argument--the argument from essential properties, which we'll be turning to later? So maybe Plato just thought there wasn't a good answer to Cebes' objection. But I want to say, on Plato's behalf, or at least on behalf of the argument, Socrates should never have concluded the argument with this odd qualifying phrase that the soul is "indestructible or nearly so." He should've just said the soul is indestructible, full stop. After all, if we have premises one, two, and three--only composite things can be destroyed, only changing things are composite, invisible things don't change--if you put those together, you get four, invisible things can't be destroyed. You don't get the more modest conclusion, "invisible things can't be destroyed or it's a whole lot harder to destroy them." If we've got one, two and three, we're entitled to the bold conclusion: "invisible things can't be destroyed, period." Full stop.
And then if five is true, if the soul really is invisible, we're entitled to conclude six, the soul can't be destroyed--not, the soul can't be destroyed, or if it can be destroyed it's very, very hard and takes a very, very long time. We are, rather, entitled to the bolder conclusion, the soul can't be destroyed, full stop, period, end of the discussion. So despite the fact that Socrates draws this weaker conclusion, it seems to me that the argument he's offered us, if it works at all, entitles us to draw the bolder conclusion. Not that the soul is indestructible or nearly so, but that the soul is indestructible. Well, maybe Plato realized that; maybe that's the reason why he doesn't bother giving an answer to Cebes. Maybe it's an invitation to the reader to recognize that there's a better argument here than even the characters in the drama have noticed--don't know, don't know what Plato had in mind. But at any rate, our question shouldn't be, "What was Plato thinking?" but, "Is the argument any good?" Do we now have an argument for the immortality of the soul? After all, if the soul can't be destroyed, it's immortal. Is it a good argument or not? Simmias raises a different objection. Simmias says we can't conclude that the soul is indestructible, or nearly so, or whatever, because we should not believe the sub-conclusion four, invisible things can't be destroyed. Simmias says invisible things can be destroyed. And if that's true, then of course we no longer have an argument for the indestructibility or near indestructibility of the soul. Because even if the soul is invisible, five, if nonetheless, contrary to what Socrates was claiming, invisible things can be destroyed, then maybe the invisible soul can be destroyed as well. Now, Simmias doesn't merely assert, boldly, invisible things can be destroyed. He offers an example of an invisible thing that can be destroyed--harmony. 
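Simmias' move has a clean logical shape: a single witness that is both invisible and destructible refutes the universal sub-conclusion, whatever else is true. Here is a minimal Lean 4 sketch of that schema; the names are mine, and "harmony" here is just a placeholder witness.

```lean
-- One invisible-yet-destructible witness refutes the universal claim
-- "invisible things can't be destroyed": assuming the universal claim
-- and instantiating it at the witness yields a contradiction.
theorem witness_refutes
    {Thing : Type} (Invisible Destructible : Thing → Prop)
    (harmony : Thing)
    (hinv : Invisible harmony) (hdes : Destructible harmony) :
    (∀ x, Invisible x → ¬ Destructible x) → False :=
  fun hall => hall harmony hinv hdes
```

Note that the refutation uses nothing about the mind or the soul at all; the two facts about the witness are all that matter.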
He starts talking about the harmony that gets produced by a stringed instrument; let's say a harp. In fact, he says, this is a very nice example for us to think about because some people have suggested--Simmias says--some people have suggested that the mind is like harmony. It's as though the mind is like harmony of the body. So to spell out the analogy a bit more fully, and I'll say a bit more about it later, harmony is to the harp as the mind is to the body. All right. He says, there are people who put forward views like this, and at any rate harmony can certainly be destroyed. You don't see harmony, right? Harmony is invisible. But for all that, you can destroy harmony. So there's the harp making its melodious, harmonious sounds, and then you take an ax to the harp, bang, bang, bang, chop, chop, chop, or a hammer or whatever; now the harmony's been destroyed. So even though it's invisible, you can destroy it by destroying the musical instrument on which it depends. And of course, there's the worry, right? If the mind is like the harmony of the body, then maybe you could destroy the mind, the soul, by destroying the body on which the mind depends. So the crucial point right now is that thinking about harmony is offered as a counterexample to the generalization that invisible things can't be destroyed. Harmony is invisible. Harmony can be destroyed. So invisible things can be destroyed. So you're wrong, Socrates, when you say invisible things can't be destroyed. So even if we grant that the soul is invisible as well, maybe the soul also is an invisible thing that can be destroyed. That's a great objection. It's an objection worth taking very seriously. And the oddity is, Socrates doesn't respond to it in the way that he should have, in the way that he needed to. Socrates instead spends some time worrying about the question, "Is the soul really like harmony or not?" 
Is this metaphor--think about the relationship between the mind and body as similar to the relationship between harmony and a harp--Socrates spends some time criticizing that analogy. Now, in a few minutes I'll turn to the question, what about Socrates' criticisms of the analogy? Are they good criticisms or not? But even if they are good criticisms, I want to say, that's not good enough to help your argument, Socrates. Even if we were to say, you know what? The mind isn't very much like harmony at all. That analogy really stinks. So what? All that Simmias needs to cause problems for Socrates' argument is the claim that harmony is invisible and harmony can be destroyed. As long as that is true, we can't continue to believe that invisible things can't be destroyed. So what Socrates needs to do is to say either that harmony can't be destroyed--but pretty obviously it can; the melodious sounds coming out of an instrument can be destroyed--or that harmony is not really invisible. So he would need to argue, then, that harmony is not really invisible. If he could show us, if he could convince us, that harmony is not really invisible, then we would no longer have a counterexample to the claim that the invisible can't be destroyed, and the argument could still then proceed as it was before. So that's what Socrates should have done. He should have said, "You know what? Harmony is not really invisible," or "It can't be destroyed." But there's not a whiff of that, at least in the dialogue as we've got it, not a whiff of that as far as I can see. Socrates never says, "Simmias, here is where your objection goes wrong. Harmony is not really invisible, can't really be destroyed, whatever it is. So we don't really have a counterexample." Instead, he gets hung up on this question, "Is it a good analogy? Is it a good way for thinking about the mind or not?" But even if it isn't, that wouldn't save the argument.
Now, I am going to take some time to think about whether or not harmony is a good analogy, because I actually think it is a good analogy. I think what's going on in the harmony--the suggestion that we should think about the mind like harmony, as though it was the harmony of the body--is an early attempt to state the physicalist view. Talk about the mind, says the physicalist, is just a way of talking about the body. Or, more carefully, it's a way of talking about certain things the body can do when it's functioning properly, when it's well tuned, as we might put it. Just like, talk about the harmony or the melodious sounds or what have you of the harp, is a way--these things are a way of talking about what things the harp can do. It can produce melodious, harmonious sounds when it's functioning properly, when it's well tuned. So the harmony analogy is, I think, an attempt, and not a bad attempt, at gesturing towards the question, how do physicalists think about the mind? Now, when I tried to get you to grasp how physicalists think about the mind, I used examples about computers and robots and the like. Well, it's not remotely surprising that Plato doesn't use those kinds of analogies. He doesn't have computers; he doesn't have robots. Still, he has physical objects that can do things. And the ability to do things depends on the proper functioning of the physical object. And so, I think he can see that there's this alternative to his dualism. He can see you could be a physicalist and say that the mind is dependent on the body; the mind is just a way of talking about what the body can do when it's working properly. It's dependent just the same way that, well, for example, harmony is dependent upon the physical instrument. So I think it's a very nice attempt to discuss the physicalist alternative to Plato's dualism. And that's why it will be worth taking some time to ask ourselves, well, what about Plato's objections then? 
If he can convince us that the soul is not like harmony of the body, maybe that will be some sort of problem for the physicalists. So I'll come back to that in a few more minutes. But first, let's worry about the point that I was emphasizing earlier, namely, even if the soul's not very much like harmony, so what? If harmony really is invisible and harmony really can be destroyed, then invisible things can be destroyed. Even if the soul's nothing like--that's not a good analogy for thinking about the physicalist position or what have you--so what? If some invisible things can be destroyed and harmony is an example of that, then, by golly, it's going to follow that we can't conclude from the invisibility of the soul that the soul cannot be destroyed. So even though Socrates doesn't respond to that objection, we need to ask on Socrates' behalf, is there a possible answer to this objection? And I think there are at least the beginnings of one. We have to ask: when we say, "invisible things can't be destroyed," what did we mean by "invisible?" And I want to distinguish three different possible interpretations, three different claims. So invisible means, one, there's one possibility, can't be seen. Two, different possibility, can't be observed. I've got in mind the broader notion of all five senses. Three, different possible interpretation of invisible, can't be detected. What we have to ask ourselves is, when Socrates puts his argument forward, which of these did he have in mind? First, let's be clear on how these things are different. Some things can't be seen but can be sensed some other way. So colors can be seen; smells cannot be seen, but of course smells--the smell of coffee--can be sensed through the five senses. Sounds can't be seen, they're not visible, but for all that they can be sensed. You can hear them through your ears. 
So, without getting hung up on what does the English word "invisible" mean, let's just notice that there's a difference between saying "it can't be seen through the eyes" and "can't be observed through one sense or the other." And then three is a different notion altogether, a stronger notion altogether. There might be things that can't even be detected through any of the five senses. The number three--not only can't I see it, I can't taste it, I can't hear it, I can't smell it, can't touch it, right? The number three is invisible in this much bolder way. It can't be detected at all by the five senses--can't be detected in terms of its--it doesn't leave traces behind, right? I don't see dinosaurs, but of course they leave traces behind in fossils. There's a way in which you can talk about it being detected by its effects. All right. So again, don't get hung up on what does the English word invisible mean. Let's just ask ourselves, what notion of invisibility--if we'll use the word between these three ways--what notion did Socrates' argument turn on? Well, the most natural way to start by interpreting him is with number one. When he says, "Invisible things don't change," what he means is, things that you can't see don't change, and so--continue to interpret invisible in number four the same way--invisible things can't be destroyed. On the first interpretation what he'd be saying is, "If you can't see it with your eyes, it can't be destroyed." Now, the trouble is, harmony shows that that's not so. Harmony is indeed invisible in sense number one. You cannot see it with your eyes. But for all that, it can be destroyed. So if what Socrates means by invisibility is the first notion, can't been seen with your eyes, then the argument's not any good. Harmony is a pretty compelling counterexample. But maybe that's not what Socrates means by invisible. Maybe instead of one, he means two. 
When he talks about the soul being invisible and invisible things being indestructible, maybe he means things that can't be observed through any of your five senses. Now, in point of fact, I think that is what he meant. Let me just give a quick quote. In our edition, this is page 29. Some of you may have noticed that there are little standardized paginations in our edition as well. So it's in the academy paginations, number 79; he's talking about the difference between the visible and the invisible things, chairs versus the forms. And he says, "These latter, chairs, trees, stones, you could touch and see and perceive with the other senses. But those that always remain the same, the forms, can only be grasped by the reasoning power of the mind. They are not seen but are invisible." So I think it's pretty clear that when Socrates starts talking about what's visible versus invisible, he doesn't mean to limit himself to vision; he means to be talking about all of the five senses. So when we say--when he says--"Invisible things can't be destroyed," he means the things that you can't see or touch or hear or feel--whatever it is--see, touch, smell, taste. Those things can't be destroyed. Now, notice that if that's the way we interpret his argument, harmony no longer works as a counterexample. Harmony was invisible when we meant definition number one, can't be seen. But it's not invisible if we mean definition number two, can't be sensed, can't be observed. Harmony can be sensed through the ears, in which case it's not a counterexample. It's not a counterexample to four. Four says, "Invisible things can't be destroyed." And what Socrates should have said is, harmony is not invisible in the relevant sense of invisible, since it can be sensed. But--and this would be the crucial point--notice, Socrates should've continued, the soul is invisible in that sense. You don't see the soul; you don't taste the soul; you don't touch the soul; you don't hear the soul. 
So if we understand the argument in terms of the second interpretation of invisible, it looks as though the argument still goes through. Simmias' counterexample fails. Harmony is not invisible in the relevant sense, so it could still be true that invisible things can't be destroyed. Since the soul is invisible in that sense, it would follow that the soul can't be destroyed. However, even if Simmias' objection, his particular counterexample, harmony, fails, that doesn't mean that we should still accept the argument because there might be a different counterexample. So here's my proposal. Suppose we think not about harmony but radio waves. Radio waves are not sensible. They are not observable. You don't see a radio wave. You can't touch a radio wave; you can't smell a radio wave, and interestingly enough, you can't hear radio waves. But of course, for all that, they can be destroyed. So even if we grant that what Socrates meant by invisible was "cannot be observed," we still have to say, with Simmias, "You know, four is just not true. Some invisible things can be destroyed." Radio waves can be destroyed even though they're invisible in the relevant sense. Yeah? Question? Student: [inaudible] Professor Shelly Kagan: Okay. So the suggestion was, radio waves are a bit like the forms. Student: [inaudible] Professor Shelly Kagan: They're not forms, but they're perfect in that way. Was that the thought? Student: [inaudible] Professor Shelly Kagan: Ah! Okay, I misunderstood. So the question is rather, "Look, radio waves are not like forms," to which the answer is "Yes, that's exactly the problem." They are invisible, like the forms, but unlike the forms they're destructible. And that's precisely why we've got to worry about the soul. Is the soul invisible in the way the forms are, being indestructible, or is it invisible in the way that radio waves are, destructible? Now again, my point here is not to say, "Oh, you idiot, Plato! Why didn't you think of radio waves?" 
Our question is not, was Plato overlooking something he should've thought of? It's, does his argument work or not? Is it true that the invisible things can't be destroyed? And it seems to me that some things that are invisible in the relevant sense, radio waves being an example of that, can be destroyed. So even though the soul is also invisible in the relevant sense, maybe it can be destroyed as well. Now, the answer, it seems to me, the only answer I can imagine Socrates or Plato giving at this point, is to say, "Look, I need a different definition of invisible. Not two, but three. Don't talk about what we can sense; talk about what we can detect." Radio waves can be detected, right? After all, radios do that. You turn on your radio, the radio wave's passing by, boom--properly tuned, you detect it. It turns it into these sounds that we can hear. We can detect radio waves on the basis of their effects on radios, among other things. So maybe by invisible he should've moved to this stronger, bolder definition of invisible. Let's call something invisible not only if it can't be seen, not only if it can't be observed, but if it can't be detected at all. Look, the forms, after all, can't be detected. There's no radio for the number three that will tell--There's no Geiger counter to tell you the number three is nearby or something, right? So Plato could still insist things that are invisible, in the sense of undetectable, can't be destroyed. But radio waves, they're detectable. So they're not a counterexample, now that we interpret the relevant notion of invisibility as undetectability. So couldn't Plato continue to claim, things that are fully invisible, meaning undetectable, those things can't be destroyed. Radio waves aren't a counterexample to that. I think maybe Plato could say that. But, if we give him four, where we read invisible as meaning utterly undetectable, it's no longer so clear to me that we can give him five. Is the soul invisible?
Well, it was, when by invisibility we meant can't be seen; it was, when by invisibility we meant can't be tasted or touched or heard or smelled. But is it still invisible if by invisibility we mean can't be detected? Is it true that the soul can't be detected? I've got to say, I think it's no longer right. Once we interpret invisibility that way, the soul is detectable in just the way--not literally just the way, but in something similar to the way--that radio waves are detectable. If you hook a radio wave up with a radio, you can tell the radio wave was there because of what the radio's doing, giving off these sounds. If you hook a soul up to a body, you can tell the soul is there by what the body is doing, discussing philosophy with you. You detect the presence of your friend's soul through its effects on your friend's body. But that means the soul isn't really undetectable. But if the soul's not really undetectable, it's not really invisible in the relevant sense. And if it's not really invisible, then even if there is a notion of invisible, such that things that are invisible in that sense can't be destroyed, the soul's not invisible in that sense. I've gone over this argument at such length because--I hope it's clear--I think it's a pretty interesting argument. The argument from simplicity is quite fascinating. The idea that you couldn't break the soul if it didn't have parts, and the way to tell that it doesn't have parts is because it's invisible, because invisible things can't have parts, that's a quite difficult argument to pin down, does it work or does it not work. But I think, as we think it all through, we have to conclude it doesn't work. Okay.
Professor Shelly Kagan: All right, so this is Philosophy 176. The class is on death. My name is Shelly Kagan. The very first thing I want to do is to invite you to call me Shelly. That is, if we meet on the street, you come talk to me during office hours, you ask some question; Shelly's the name that I respond to. I will, eventually, respond to Professor Kagan, but the synapses take a bit longer for that. It's not the name I immediately recognize. I have found that over the years, fewer and fewer students feel comfortable calling me Shelly. When I was young, it seemed to work. Now I'm gray and august. But if you're comfortable with it, it's the name that I prefer to be called by. Now, as I say, this is a class on death. But it's a philosophy class, and what that means is that the set of topics that we're going to be talking about in this class are not identical to the topics that other classes on death might try to cover. So the first thing I want to do is say something about the things we won't be talking about that you might reasonably expect or hope that a class on death would talk about, so that if this is not the class you were looking for, you still have time to go check out some other class. So here are some things that a class on death could cover that we won't talk about. What I primarily have in mind are sort of psychological and sociological questions about the nature of death, or the phenomenon of death. So, a class on death might well have a discussion of the process of dying and coming to reconcile yourself with the fact that you're going to die. Some of you may know about Elisabeth Kübler-Ross' discussion of the so-called five stages of dying. There's denial, and then there's anger, and then there's bargaining. I actually don't remember the five stages. We're not going to talk about that.
Similarly, we're not going to talk about the funeral industry in America and how it rips off people, which it does, in their moments of grief and weakness and overcharges them for the various things that it offers. We're not going to talk about that. We're not going to talk about the process of grieving or bereavement. We're not going to talk about sociological attitudes that we have towards the dying in our culture and how we tend to try to keep the dying hidden from the rest of us. These are all perfectly important topics, but they're not, as I say, topics that we're going to be talking about in this class. So what will we talk about? Well, the things we'll talk about are philosophical questions that arise as we begin to think about the nature of death. Like this. In broad scope, the first half of the class is going to be metaphysics, for those of you who are familiar with the philosophical piece of jargon. And roughly, the second half of the class is going to be value theory. So, the first half of the class is going to be concerned with questions about the nature of death. What happens when we die? Indeed, to get at that question, the first thing we're going to have to think about is what are we? What kind of an entity is a person? In particular, do we have souls, and for this class when I talk about a soul, what I'm going to mean is sort of a bit of philosophical jargon. I'm going to mean something immaterial, something distinct from our bodies. Do we have immaterial souls, something that might survive the death of our body? And if not, what does that imply about the nature of death? What kind of an event is death? What is it for me to survive? What would it mean for me to survive my death? What does it mean for me to survive tonight? That is, you know, somebody's going to be here lecturing to the class on Thursday, presumably that will be me. 
What is it for that person who's there on Thursday to be the same person as the person who's sitting here lecturing to you today? These are questions about the nature of personal identity. Pretty clearly, to think about death and continued existence and survival, we have to get clear about the nature of personal identity. These sorts of questions will occupy us for roughly the first half of the semester. And then we'll turn to value questions. If death is the end, is death bad? Now, of course, most of us are immediately and strongly inclined to think that death is bad. But there are a set of philosophical puzzles about how death could be bad. To sort of give you a quick taste, if after my death I won't exist, how could anything be bad for me? How could anything be bad for something that doesn't exist? So how could death be bad? So it's not that the result is going to be that I'm going to try to convince you that death isn't bad, but it takes actually a little bit of work to pin down precisely what is it about death that's bad and how can it be bad? Is there more than one thing about death that makes it bad? We'll turn to questions like that. If death is bad, then one might wonder would immortality be a good thing? That's a question that we'll think about. Or, more generally, we'll worry about how should the fact that I'm going to die affect the way I live? What should my attitude be towards my mortality? Should I be afraid of death, for example? Should I despair at the fact that I'm going to die? Finally, we'll turn to questions about suicide. Many of us think that given the valuable and precious thing that life is, suicide makes no sense. You're throwing away the only life you're ever going to have. And so we'll end the semester by thinking about questions along the lines of the rationality and morality of suicide. So roughly speaking, that's where we're going. First half of the class, metaphysics; second half of the class, value theory.
Next thing I need to explain is this. There's, roughly speaking, two ways to do a class, especially an introductory class like this. In approach number one, you simply lay out the various positions, pro and con, and the professor strives to remain neutral; sort of not tip his hand about what he holds. That's approach number one. And sometimes in my intro classes that's the approach that I take. But the other approach, and the one that I should warn you I'm going to take this semester, in this class, is rather different. There's a line that I'm going to be developing, pushing, if you will, or defending in this class. That is to say, there's a certain set of views I hold about the issues that we'll be discussing. And what I'm going to try to do in this class is argue for those views. Try to convince you that those views are correct. To help you know sort of ahead of time quickly what those views are, I want to start by describing a set of views that many of you probably believe. So I'm going to give you a cluster of views. Logically speaking, you could believe some of these things and not all of them. But here's a set of views that many of you probably believe, and I imagine most of you believe at least some of these things. So here's the set of common views. First of all, that we have a soul. That is to say we are not just bodies. We're not just lumps of bone and flesh. But there's a part of us, perhaps the essential part of us, that is something more than the physical, the spiritual, immaterial part of us, which as I say in this class we'll call a soul. Most of us, most of you, probably believe in souls. Certainly most people in America believe in some sort of immaterial soul. And given this existence of this immaterial soul, it's a possibility, indeed a fair likelihood, that we will survive our deaths. The death will be the destruction of my body, but my soul is immaterial and so my soul can continue to exist after my death. 
And whether or not you actually believe in a soul, you hope that there's a soul so that there'll be this serious possibility of surviving your death because death is not only bad, but so horrible that what we would like to have happen is, we would like to live forever. And so, armed with a soul, as it were, there's at least the possibility of immortality. Immortality would be wonderful. That's what we hope is the case, whether or not we know that it's the case. Immortality would be wonderful. That's why death's so bad. It robs us of immortality. And if there is no soul, if death is the end, if there is no immortality, this is such an overwhelmingly bad thing that the only, the obvious reaction, the natural reaction, the universal reaction, is to face the prospect of death with fear and despair. And as I mentioned earlier then, death is so horrible and life is so wonderful that it could never make sense to throw it away. So suicide is both immoral on the one hand and never makes sense. It's always irrational as well, in addition. That, as I say, is I think a common set of views about the nature of death. And what I'm going to be doing, what I'm going to be arguing in this class, is that that set of views is pretty much mistaken from beginning to end. And so I'm going to try to convince you that there is no soul. Immortality would not be a good thing. Fear of death isn't actually an appropriate response to death. Suicide, under certain circumstances, might be rationally and morally justified. As I say, the common picture is pretty much mistaken from start to end. That's at least my goal. That's my aim. That's what I'm going to be doing. Now, since of course, I believe the views I believe--and I hope at the end of the semester you'll agree with me, because I think they're true and I hope you'll end up believing the truth [laughter]. But I should say that the crucial point isn't for you to agree with me. The crucial point is for you to think for yourself. 
And so what I'm really doing is inviting you to take a good, cold, hard look at death, and to face it and think about it in a way that most of us don't do. If you, at the end of the semester, haven't agreed with me about this particular claim or that particular claim, so be it. I'll be content--I won't be completely content--but I'll be at least largely content as long as you've really thought through the arguments on each side of these various issues. Karen, maybe this would be a good time for you to pass around the syllabus. Next introductory remark: A lot of today's talk is going to be devoted to business. I'll get to, if time permits, some philosophy at the end. I want to make one more remark about what I'll be doing in terms of this class. This class, as I say, is a philosophy class. We'll basically be sitting here thinking about what we can know or make sense of with regard to death using our reasoning capacity. We'll be trying to think about death from a rational standpoint. One kind of evidence or one kind of argument that we won't be making use of here is appeal to religious authority. So some of you may believe in, for example, the existence of an afterlife. You may believe you're going to survive your death. You may believe in immortality because that's what your church teaches you. And that's fine. It's not my purpose or intention here to try to argue you out of your religious beliefs or to argue against your religious beliefs. All I'm going to ask is that we not appeal to such religious arguments, appeal to revelation or the authority of the Bible, or what have you, in the course of this argument. In the course of this class. If you want to, you could think of this class as one big hypothetical. What conclusions would we come to about the nature of death if we had to think about it from a secular perspective? Making use of only our own reasoning, as opposed to whatever answers we might be given by divine revealed authority. 
Those of you who believe in divine revealed authority, that's a debate for another day. It's not a debate that we're going to be engaged in here in this semester. Similarly, although I'm not going to ask you in your discussion sections to hide your religious views, you'll be asked in the course of defending them, to give reasons that would make sense to all of us. That's by way of sort of where the class is going. Let me now turn to some discussion about the requirements of the class, grades and so forth and so on. The syllabus is going around the class. Almost all of you have it at this point. The syllabus doesn't really say a whole lot. I've already given you an overview of what topics we'll be covering. The crucial point about the syllabus is that it indicates what reading you need to have done for any given week. Now, I've done my best to peg the readings to where I will be on that week's lecture, but I don't lecture with lecture notes, for the most part. Sometimes I take a little bit longer than I anticipated. Actually, I often take a little bit longer than I anticipated. No doubt at some point I'll fall behind. At some point I may rush to catch up ahead. It won't always be the case that the readings will exactly coincide with where the lectures are at. Nonetheless, in any given week, for the start of that week, you should have done the readings that are listed for that week. The readings on the syllabus simply say the author, and there are a couple of books that are available at the bookstore. There is a larger packet of readings that's available as a course pack at Tyco's. And so for any given week you can find the reading. In one or two cases, maybe just one actually, where I've got more than one article by the given author, I've given the title of the article as well. It shouldn't be difficult to locate the reading for any given week. The format of the class, of course, is a familiar and straightforward one.
I'll be sitting here lecturing twice a week, this time, 10:30 to 11:20. Once a week you will break up into discussion sections. The discussion sections will meet for 50 minutes. Each one of you will have a single time. But it'll be different times the discussion sections meet. For the first time, the philosophy department has just switched over to the online discussion section registration system. I'm not 100% certain how that works. I've not used it before. I take it the idea is something like this. Right now, if you were to shop the class, you could find the tentative list of discussion section days and times. So be sure to find some time that works for you. You can't actually register for any of those discussion section times yet. But as of, I think, next week when you're able to begin your online registration, you will be able to register for any discussion section that still has a slot, still has a space open in it. In fact, you won't be able to finalize your registration for your courses until you've actually signed up for an available slot. Once you have registered, if some other slots become available that weren't previously available, I gather you'll be sent some sort of email by the system, in case some other time would be better for you. You can put yourself on waiting lists and so forth. It sounds pretty good on paper. Maybe it'll all work smoothly. I've never been through it before. I hope we won't have any problems. Right now what you want to make sure is that there is a time that's available--right now all the times are available--but that there is a time that works for you. Because if you can't find a discussion section that works for you, you won't be able to take this course. Any questions about that? I should actually ask, any questions about anything that I've asked or said so far, up to this point? Let me make a remark about questions, which is--today's mostly business. Hopefully, it'll be fairly straightforward. 
But both today and throughout the entire semester, as I'm lecturing I want to invite you to jump in with questions. Well, jump in is a bit of an exaggeration. I don't want you to just start talking, but raise your hand. If I'm saying something that you don't understand, the chances are pretty good that there's 25 or 50 other students in the class who don't understand it either. I'm just not being clear. So I want to welcome you, I really want to invite you, whenever you've got some reactions to the things that I'm saying, raise your hand, I'll call on you. Say, "Shelly, I didn't really understand what you were saying about the soul. Could you please explain that again?" Or, for that matter, if you've got some quick reactions or thoughts or responses to the arguments that I'm laying out and you want to share them with the class as a whole, then very much I want to invite you to do this. Now this class is too big for us to have some close, intimate conversation between the 150-180, however many students there are here. That's not going to happen. But the chance for detailed discussion in the discussion section, that's where that should happen. But still, there is the chance for brief reactions and definitely a chance for questions. I very much want to invite you to do that. So, if at any point you've got something you want to ask about or some two bits you want to add, raise your hand, wiggle it around, make sure I see you. I may want to finish the particular point that I'm making, but I'll try to come back to you and I'll then raise your question. And if I remember at least, I will repeat the question out loud so that everybody can hear it. 
I also want to say that I will try to have the practice of, after class ends, if you want to continue the discussion, you have some questions that occurred to you towards the end, we didn't have a chance to share them with the class as a whole, I will, on a normal day, meet outside and continue to talk with however many of you want to do that until you're done. I just love talking about this stuff and I welcome you to come to my office hours. I invite you to ask questions in class or, if you prefer, after class as well. Again, any questions about any of that? Yeah. Student: [inaudible] Professor Shelly Kagan: When are my office hours? That's a great question and I don't know the answer to it. I haven't planned them yet. On Thursday, start the class by asking me that and I'll give you an answer. All right. Other bits of business. I should say something about grades. Now many of you may have heard, many of you may know, and if you don't already know this, I should warn you, that I have a reputation around Yale as being a harsh grader. I know this is true, that is, I know I have the reputation, both because I periodically in my student evaluations get told I'm one of Yale's harsher graders, and because every now and then the Yale Daily News will have an article about grade inflation and they'll always ask me, "Well Professor Kagan is somebody..." Once there was a story on grade inflation that the Yale Daily News began by saying, "As Shelly Kagan (known at Yale as one of the hardest graders)." So I know I've got at least the reputation of being a hard grader. I don't actually know whether it's deserved or not, because Yale does not publish information about what the grading averages are. At other schools I've taught at there's been information along the lines of well the typical grade in an introductory course in the humanities is such and such. 
Shortly after I came here to Yale, and I started realizing that people thought I was a harder grader than most other Yale professors, I called the administration and asked, "Do you have this sort of information?" The answer is "Yes." "Will you give it to me?" The answer was "No." They don't share this information with the Yale faculty. Seems odd. The explanation, of course, actually isn't that hard to come by. The worry is that those of us who are harder graders than average, if the information were published, would feel guilty and sort of ease up on our grading. But those who are easier graders than average will never feel guilty and toughen up. So the result would be a constant push up with the grades. At any rate, I don't know for certainty that I'm a harder grader, but I believe that it's the case based on reactions I get when I give the speech that I'm about to give. Okay, so [laughter]. When I open the blue book, the Yale guideline, the Yale catalog, it's got a page, as you all know, where it says what letter grades mean at Yale. I didn't actually bring it this year. Sometimes I do, but I've got it pretty much memorized. It says, for example, next to each letter grade what it means. B, for example, means good. A means excellent, C means satisfactory, D is passing, F is failing. B, let's start with B. B means good. Now the crucial question then is what does good mean? I take good to mean good. Consequently, [laughter] if you were to write a good paper for me, that would get a B. And when you get a B from me--now, I say me, this is the royal me. Because I won't actually be grading your papers. Your papers will be graded by a small army of TAs. But they will grade under my supervision, and in keeping with the standards that I ask them to grade with. So when you're pissed off about your grade, the person to take it up with--well, take it up with them. But eventually you'll want to take it up with me. 
So when you get a B from us, B doesn't mean what a piece of crap. B means good job! And so you should be pleased to get a B, because it meant you were doing good work and it's not easy to do good work in philosophy. A means excellent. Now excellent does not mean publishable. Excellent does not mean you are God's gift to philosophy [laughter]. So it's crucial to understand it doesn't mean that the only way you're going to get an A is to be God's gift to philosophy. A means excellent work for a first class in philosophy. This is an introductory class. It does not presuppose any background in philosophy. Still, to get an A, you've got to show some flair for the subject. You've got to show not only have you understood the ideas that have been put forward in the readings and in the lectures and so forth, but you see how to sort of put them together in the paper in a way that shows you've got some aptitude here. You did it in a way that made us take note. That's what we try to reserve As for. Some of you will end up getting As, if not at the beginning, by the end of the semester. Many of you will end up getting Bs, if not at the beginning, by the end of the semester. Many of you will not start out doing good work. Many of you will start out doing satisfactory work or, truth be told, less than satisfactory work. Now look, I was an undergraduate once. And I know what it is to write a typical undergraduate paper. You sit down the night before and you had a couple of ideas. You thought about it maybe for a half an hour. And you meant to get to it sooner, but you had a lot of other things to do. And you throw it off in a couple of hours and maybe stay up late. You know it's not the worst thing you ever wrote, and it's not the best thing you ever wrote, and it has a couple of nice ideas, but maybe it could be better. It's sort of a satisfactory job. Yale says satisfactory means C. So many of you will start off the semester writing that kind of paper. 
And the fact of the matter is, some of you will start off writing worse papers than that. Because writing a philosophy paper is a difficult thing to learn how to do. It's exercising a set of muscles that a lot of you have not spent a lot of time exercising. Now it's not as though you haven't spent any time doing it. You've had bull sessions, right, with your high school friends or in your college dorm or what have you. But you haven't done it with the kind of discipline and rigor that we're looking for here. So, like anything else, it's a skill that gets better with practice. And what that means, of course, is you won't do as well at the beginning as you're likely to be doing toward the end. Some of you, unfortunately, won't do very good jobs at the beginning--and my TAs, I'll encourage them to be prepared to give Ds. If the vices of the paper significantly outweigh the virtues, that's a D. If the vices very significantly outweigh whatever virtues there are, that's some kind of an F. So the fact of the matter is many of you in your initial papers will get lower grades than you've probably ever gotten before in your life. I wanted to warn you about that. Now I say this not so much to depress the hell out of you, but (a) partly to warn you, and (b) to make it clear that I believe that it's a skill. Writing a good philosophy paper is a skill and you can get better at it. Consequently, most of you will get better at it. So let me make the following remark. Officially, each paper--you have three five-page papers. Each paper is worth 25% of your grade, officially. But--the remaining 25% is discussion section; I'll get to that in a minute--officially, 25% of your grade is for each of the three papers. But if, over the course of the semester, you get better, then we will give, at the end of the semester, when we're figuring out your semester grade, we'll give the later, stronger papers more than their official weight. 
For many of you, the first paper will be clearly the worst paper you write. And then we'll just throw that grade away; give greater weight to the second and third papers. If the third paper is the strongest, we will give even more weight to the third paper. There's no formula here, a great deal depends on the overall pattern, what your TA tells me about how you've done over the course of the semester. But this policy of giving greater weight, if you show improvement, is something that most of you will benefit from. So if you end up not doing well, the moral of the story is not to go running off and dropping the class, but to figure out what you did right, what you didn't do right, how to make the second paper better and the third paper stronger, again. And if you do show improvement, that will very significantly influence and emerge in terms of the impact it has on your overall semester grade. Because of this policy, I don't actually know when all is said and done whether at the end of the semester I'm any harder, whether I depart from the average or not. Let me quickly mention there's a fairly typical grade distribution for the overall grades of this, at the end of the semester. Roughly 25% of you are likely to end up with some kind of an A at the end of the semester. Fifty, 55% of you or so are likely to end up with some kind of a B. Twenty, 25% of you might end up with some sort of a C. Sometimes there's a couple of percent that end up worse than that. Unsurprisingly, you've got the ability to do decent work in this class and most of you have the ability to do good work, and some of you have, a fair chunk of you have, the ability to do excellent work, though it may take some work on your part to get to that point. The last thing I should say about the grades is why do I do this? It's really that I try to do it as a sign of respect for you.
I know that may seem like a surprising thing to say when I've just sort of gone on my little gleeful rant about how I'm going to fail all of you [laughter], but it's worth my saying you guys are so smart. You're so talented. You've gotten so far on your ability that many of you have learned to coast. It's not doing you any kind of service to let you continue coasting. My goal here is to be honest with you, right? Look, you're smart enough probably most of you to pull off some sort of B without breaking into a sweat, or at least not a significant sweat. So be it. But it's just lying to you to pretend that that's excellence in philosophy. So what I want to do in this class is be honest with you and tell you, "You've really done work here to be extraordinarily proud of yourself" versus "Yeah, you've done something okay" or "You've done good work. Admittedly, it's not great, but you've done good work." All right, so the papers are 75% of your grade. The remaining 25% of your grade is based on discussion section. Now that's a lot of your grade to turn on discussion section. So the first thing I need to tell you is I really mean it. If you blow off discussion section, your grade will suffer. So it's worth knowing in a general way what you need to do to earn a good grade in discussion section and here the answer is, perhaps the obvious one, you need to participate. You need to come to discussion sections having thought about the lectures, having done the readings, having thought about the questions that they raise, and you need to come to discussion section then prepared to discuss this week's set of issues. You need to listen to what your classmates are saying and say why you disagree with them. And not just that you disagree with them, but to raise an objection. Or why you agree with them.
And when somebody else then attacks them, say, "Look, I think that what John was saying was a good point and here's how I think he should have defended his position," or what have you. You need to engage in philosophical discussion. If you're not participating in discussion section, you're not doing what the section is there for. Philosophers love to talk and we love to argue. The way to get better at thinking about philosophy is by talking about philosophy. So I'm putting my money where my mouth is. I'm saying, "Look, yeah, that's an important part of the class. So important that it's going to be worth 25% of your grade." Again, it doesn't mean--this is slightly different from the papers--that you've got to be brilliant philosophically to get an A. Rather, you've got to be a wonderful class citizen to get an A for discussion section. So, as I put it, in fact I think I put it this way on the syllabus, participation--and here I mean respectful participation, not hogging the limelight--participation can improve your grade, but it won't lower your grade. Nonparticipation, or not being there, that will lower your participation grade. Any question about any of that? All right. So I'm sorry to have sort of the long gloom and doom, but it seems that it's only fair to let you know what you're getting into. One other remark about the discussion sections. The way I think of it is like the conversation hour for your foreign language class. How many of you have had a philosophy class before? Thanks. Maybe 15% of you. Maybe 20% of you. Most of you have not. That's pretty normal. Don't go into discussion section thinking, "Oh, I can't talk. I don't have any background in philosophy. I've never done this sort of thing before." That's true for most of you. The way you get better is by talking philosophy. All right. Next remark. I guess this is sort of just one last connection with regard to grades. This is an intro philosophy class. 
The crucial point about intro is it means first class in philosophy. It doesn't presuppose any background in philosophy. It doesn't necessarily mean easy. Some of this material for some of you is going to be very, very difficult. And although the number of pages that you'll have to read are not--there's not a lot. Probably in a typical week, 50 pages, maybe less. For many of you, you're going to find it dense material. And although I don't really have the fantasy that many of you will read this stuff twice, if you had the time to do it, that would be a wonderful thing to do. Philosophy is hard stuff to read. Other remark about this being an intro class is that it's introductory in that the issues that we're talking about are kind of first run through. Every single thing that we discuss here could be pursued at greater depth. So, for example, we'll spend whatever it is, maybe a week and a half talking about the nature of personal identity, two weeks. But one could easily spend an entire semester thinking about that question alone. So don't come away thinking that whatever it is that we've talked about here in lecture is the last word on the subject. Rather, it's something more like first words. Actually, one other word about the readings and the lectures. With one exception, I won't be spending very much time talking about the readings. The exception is Plato, where I'll lecture, maybe two lectures, trying to reconstruct Plato's central arguments, at least the arguments relevant to our class. We'll be reading one of Plato's dialogues. But for the most part, although I'll occasionally, periodically refer to the readings, I won't spend a lot of time talking about the views in the readings. The readings you should think of as complementary to my lectures. The idea is that there's more to say than what I've said. And you'll find some more of what there is to say in the readings. 
Or there may be positions that I mention, but I don't develop, because I'm not perhaps sympathetic to them, and you might find somebody who is sympathetic to them, developing them in the readings. The readings are a crucial component of the class. You won't get everything you need simply by coming to the lectures. But equally the case, that the views that I'll be developing in the lectures are, although not necessarily unique to me, aren't all laid out in the readings. You won't get everything I'm talking about in the lectures, if all you do is the readings. They're both parts of the class. All right. I want to end by--I'm not close to ending, but the last thing I'm going to do is read aloud some student evaluations. I have found over the years that some students like me; some students don't like me. I don't know how to make this point any clearer than to share with you a sampling of the student evaluations. These are not actually from last spring, but they're typical enough that I was too lazy to make some new quotes. Quote one. These are actual quotes from former actual students. (1) "The lectures were clear and followed a very logical order." (2) "I thought the class was not always organized." (3) "I thought it was a very well organized class." (4) "Overall, I was unsatisfied with this course. Few substantive conclusions were reached." (5) Along the same vein, "I think he should avoid saying at the end of each segment of the class, ‘Ultimately, you'll have to decide what to think for yourself.'" [laughter] I should end the class by saying, "You will believe." Actually, I started the class by saying that. You will believe what I believe. (6) "It might be improved by presenting other views better and more objectively, since Kagan always ended a particular line of reasoning by defeating the argument if he didn't agree with it. He could be a bit more unbiased and tolerant of other perspectives." 
(7) "Lectures were sometimes repetitive or obvious, but occasionally, they provided new insights." (8) "I know that some felt the pace of the arguments was a little slow, but I felt that this was generally necessary, not only for the unphilosophy-savvy population, but also to cover all points." (9) "Extremely thorough and thoughtful. Receptive to questions. Brilliant." I like that one [laughter]. "Often long-winded." Hmmm. (10) "He does go around and around the same idea a number of times, which does cut down on the notes for the class, but it can get a little boring." (11) "Though I've heard students say he often repeats himself, I think this is a merit in a philosophy course in which arguments and thoughts can quickly become confusing." (12) "Shelly Kagan is a fabulous, resourceful, utterly convincing lecturer." (13) "He would work through arguments right in front of--" I like this one, because this is what I at least aim to be inside my head. Here's what I'm doing. Thirteen: "He would work through arguments right in front of us, which then helped me work through them on my own." (14) "Shelly is an incredibly dynamic lecturer." (15) "He's just in his own world babbling on and on [laughter]. I'd zone out with regularity." (16) "I have to say that Shelly Kagan is probably the best lecturer I had in my four years at Yale." (17) "He's the type of teacher you either love or hate." Now that's pretty clearly true. I wish there were some easy litmus test that I could just give you so you'd know which of you would be making a mistake taking this class. I don't know how to give it to you. Next topic, grades. (1) "He tried to intimidate us too much with his promise of impossible grading so that everyone took the class credit/D/fail, when we all probably ended up with As or Bs. His grading was not hard." (2) "I recommend it, but only credit/D/fail. Professor Kagan is harsh with grading." (3) "When Shelly says he's the harshest grader on campus, he isn't lying. 
I was consistently surprised by how poorly I did on papers [laughter]. The standards in this class are just different from all other classes." (4) "Kagan's reputation as a harsh grader is unfounded. If you put in the effort, the grade will reflect that." So that settles the question am I a harsh grader or not. The last question for the evaluation is should you take the class or not? Would you recommend it to somebody else? (1) "I believe this class is one of the most mind-opening experiences of my life." (2) "No. It's a waste of a course." [laughter] (3) "It gets kind of depressing at times, but I suppose that's due to the nature of the subject [laughter]." (4) "This course stands out as one of the more unique and stimulating courses I've taken at Yale." (5) "Excellent class. It made me think about life and death in a new way. What more can you ask for from a class?" (6) "I would not recommend it. The class just seemed to be a platform for Kagan to throw out random ideas and the students were never required to engage in any thought." Well, that clears that up. Let me end with a couple of other quick remarks. One--these are some of my all-time favorites from previous years. (1) "Not doing the reading didn't hurt me at all." Now, these are anonymous comments. I don't know who wrote this comment. But I do know this. Whoever wrote this remark is an idiot [laughter]. Whoever wrote this remark seems to be under the impression that the point of being at Yale is to spend $40,000 a year of your parents' money and get away with learning as little as possible. Well, for those of you who want to try it, you probably could pass this class and maybe even get an okay grade without doing the readings. There's no final exam. But still, it's crucial to understand, doing the readings is an important part of learning what this course has to offer. Different quote. 
"Kagan is a self-righteous little man" [laughter]. Now I've got to tell you, that bit about being little, that really hurts. Another one. "Great course. Wonderful professor. Fascinating subjects. The deepest thinking I've done in my life." Final quote. "This class taught me how to think more than any other at Yale." I don't know whether I pull it off. Pretty obviously, for a number of students, I don't manage to pull it off, but that's at least what my aim is. I'm trying to help you think. I welcome you and I hope you'll be back on Thursday.
CS_162_Operating_Systems_and_Systems_Programming_Berkeley | CS162_Lecture_3_Abstractions_1_Threads_and_Processes.txt | all right everybody welcome back to the third lecture of the virtual cs 162. tonight we're going to dive right into some material and try to give you a programmer's viewpoint of this so the first several lectures we're going to basically talk about what you as a user level programmer might see from the operating system before we get really in depth in how the operating system gives that view so that's basically our goals for today we're going to talk about threads what they are and what they aren't and why they're useful so we'll give you some examples and we'll also talk about how to write programs with threads and some alternatives to using threads okay so if you remember from last lecture we talked about four fundamental concepts where the focus was really on virtualizing the cpu and one of them was the thread which is essentially an execution context that fully describes an execution state with program counter registers execution flags stack etc we talked about an address space which is the set of memory addresses that are visible to a thread or a program we'll talk more about that we also then talked about a process i did see some questions on piazza about is something a thread or a process and that's the wrong question so as i responded a process is basically a protected address space with one or more threads in it so typically you talk about a process with threads and the question of is it just a thread or is a process involved usually involves protection and then the final thing that we finished up with was this essential element of modern operating systems which is hardware that can do dual mode operation and protection which really boils down to there being two states kernel state and user state where only certain operations are available in kernel state and as a result the kernel is able to provide a protected
environment okay so just again recalling from last time we talked about how to take a single processor or single core which is going to be pretty much what we talk about for the next several lectures and give the illusion of multiple virtual cores so what you see here is the programmer's viewpoint is going to be that there's a set of cpus that are all talking through a shared memory even though there's only one actual cpu okay and we're going to think of threads as a virtual core and multiple threads are achieved essentially by multiplexing the hardware in time so we talked briefly about this idea of these three virtual cpus executing on the single cpu by loading and storing registers in memory as we go so we sort of load magenta's registers run for a while then the cyan ones and the yellow ones etc and the thread is executing on a processor when it's actually resident in the processor's registers and it's idle or asleep when it's not and we'll talk a lot more about the states of a thread once we get into the internals of the operating system but each virtual core or thread has a program counter a stack pointer some registers both integer and floating point in most cases and the question of where it is well it's on the real physical core or it's saved in memory in a chunk of memory we call the thread control block and the difference between these two things is whether it's actually running right now or in sort of a suspended state the other thing we talked about was this idea of an address space which is the set of all addresses that are available to a given processor or thread at any given time and we talked about how 32-bit addresses give you 4 billion addresses and 64-bit gives you 18 quintillion addresses 10 to the 18th some of you may know 10 to the 18th as an exabyte but that's the general idea of an address space what's more interesting for our purposes here is going to be the virtual address space which is the processor's view of
memory where the address space that the processor sees is independent of the actual physical space and in most cases that involves some explicit translation so the processor brings out virtual addresses they go through some translation to get to physical addresses and for the purposes of this particular lecture here's a thought that you can put to use throughout the lecture which is this idea of translation through a page table and again we said 61c did talk about that we'll talk about it in a lot more depth but here we have two programs or threads or whatever you want to call them and they're operating in their address spaces with code and data and heap and stack and all of those things and what happens is the addresses come out of the processor and they go through a translation map and again however that works we don't really care for the moment and after they're translated they get translated into physical addresses so the code of this blue thread or process basically gets translated into this particular chunk of physical memory whereas the green code gets translated into this particular chunk okay now the question that is in the chat here is are virtual addresses handled by the os or by the cpu hardware and the answer is yes so in reality these little translation maps are usually part of something called a memory management unit in hardware but the operating system is responsible for configuring these things by setting up something called a page table and so we will talk a lot about that so don't worry if you don't remember the details and in fact 61c hardly talked about that but for now imagine that there's actually a translation map that basically takes these addresses from virtual processor one and turns them into the blue ones and from virtual processor two turns them into the green ones and notice that if we translate it right then blue can't even touch anything that green is addressing because there's no way
for blue's address space to get transformed into something that addresses the green physical memory okay and notice also i have the operating system down here which is completely separate as well so this simple idea of translation gives us quite a bit of protection okay all right now are there any questions on that are we good and let's try to keep the chat for actual questions here so that we can get questions from people so let's keep this mental image here of page translation and how it protects green from blue and the operating system from both of them and let's move on so if you remember we talked then about processes the question here about where we store the page table that's actually going to be stored in the operating system itself in a way that is not addressable to green or blue okay and the reason that parts of user space can't address all of physical memory is exactly what you see here you take all the addresses that are possible and they just don't translate to things that are green or white from the blue side and that basically prevents the blue processor from actually even addressing green or white okay and we'll talk more about this as we go okay now so if you remember when we talked about processes again a process is basically a protected environment with one or more threads here's one with a single thread your pintos projects that you're going to be dealing with essentially have a single thread per process but real operating systems or i will say more sophisticated ones can have multiple threads okay and so a process is really this execution environment with restricted rights one or more threads executing in a protected address space okay which owns a chunk of memory some file descriptors and some network connections okay so a process is an instance of a running program if you have the same program running twice it'll be
running in two different processes and why do we have processes as an idea okay it means that those two processes are protected from one another and the os is protected from them so this idea of processes is really the essential protection idea that we're going to be talking about in the early part of this class okay in a modern os pretty much anything that runs outside of the kernel runs in a process for now okay so the last thing we talked about as i mentioned was dual mode operation and here processes execute in user mode and the kernel executes in kernel mode but as folks on the chat have talked about a couple of times here if you think about this translation for a moment if blue is able to alter its own page tables then all bets are off for protection right if blue can alter translation so that some virtual address which was previously not valid can somehow map to green or white then you know all the protection is broken there so really we need some way to make sure that the user code can't alter those page tables among other things and so that's where the dual mode operation comes into play okay so processes running in user mode are running with a bit set in the processor that says user mode and in kernel mode that's when the bit says kernel mode and it's only if you happen to be in kernel mode that you can modify things like page tables okay and we'll get into much more detail on that as we go on so this is still not quite enough okay you have to make sure that user mode apps can't just randomly go into kernel mode and execute anything they want because what's the point right and so we talked briefly about very carefully controlled transitions between user mode and kernel mode and those carefully controlled transitions basically allow us to make sure that the only way to go from user mode to kernel mode is when doing things that the writer of the kernel supports okay and putting things in kernel mode is typically
done only with extreme care because things that are running in kernel mode have control over all of the hardware and so typically only the operating system developer puts things in kernel mode all right we'll talk about some slight versions of that that are a little different as we get further in the course but for now pretty much only the developer of the operating system puts things in kernel mode okay and here's an example of something we'll talk about today for transitioning here's the user process running in user mode with the mode bit 1 making what we're going to talk about as a system call which goes into the kernel executes some special function and then returns to user mode and that system call is very restricted so yes it changes the mode bit zero in this case saying that we're in kernel mode but it only allows you to do that if the code you're calling is one of a very small number of entry points and so an example of this might be open a file or we're going to talk today about start a new thread or start a new process so this could be a fork system call and so that dual mode operation involves this extreme restriction okay so what are threads okay so a thread is a single unique execution context talked about that it provides an abstraction of a single execution sequence that represents a separately schedulable task okay that's also a valid definition threads are a mechanism for concurrency so we're going to talk a lot about that understanding that because of threads you can have multiple simultaneous things that overlap each other and that can be very helpful protection is completely orthogonal okay so again that question of is this a thread or a process is the wrong question the process is the protected environment the threads run inside of it and that process would include for instance an address space plus a translation map through a page table okay all right
by the way the mode bit there's a question about the mode bit here let's just say that a mode bit equal to one is user mode and zero is kernel mode but this is completely dependent upon the particular piece of hardware and in fact in x86 there's even more than just two options here so for now there's user mode and kernel mode okay that's what we need to remember okay now what are threads okay so protection is this orthogonal concept but let's dive into the motivation for why we even bother with threads okay and yes i will say one other thing since this topic is coming up in the chat so one way in which things get added to the kernel is device drivers and we mentioned last time that those are weak points in reliability typically and those device drivers are things that get added only if you're a supervisor okay and you've made a decision that you're willing to add this to the kernel and risk that device driver so what i mean by protection being orthogonal again is that the process is the protected environment and the thread is the execution context okay so those are different things so a process has one or more threads in it okay now for now processes contain their own threads and don't access other people's threads except through communication mechanisms okay so what's our motivation for threads so operating systems as you can imagine need to handle multiple things at once okay you know processes interrupts background system maintenance all of those things keystrokes i'm moving my mouse around i'm drawing things on the screen so there's many things at once or multiple things at once mtao by the way i made that up but we're going to use it for the rest of the lecture so operating systems need to handle mtao okay and how do we do that we do it with threads so examples are network servers have to handle multiple things at once because there are many network connections that come in
at once parallel programs well by definition if you have a bunch of cpus and you want to run something in parallel you need to do multiple things at once and threads could be a way to do that and when you talked about parallelism in some of the 61s one of the ways to do that is with threads okay now programs with user interfaces invariably need mtao so that would be again like i said mouse movement keyboard if you have a voice interface the microphone here is something things get drawn on the screen these are all different independent things that can happen and so having threads available to allow them to happen in parallel is important okay and they're going to make it really easy to program network and disk bound programs have to handle mtao because you have to hide network and disk latency so the question on the chat as i mentioned multiple things at once is a term i just made up it's up top here multiple things at once okay so you need to be able to if you're waiting for something to come off the disk or from the network you want to have a thread that's just sitting there waiting but not blocking everybody else up so you have another thread doing something else okay so now let's keep the chat down to just things that we're actively talking about well the concept of how processes communicate with each other is a much more interesting extended one so don't worry we will get to that okay not today but we'll get to it so threads basically are a unit of concurrency provided by the operating system and each thread can represent one thing or one task or one idea one chunk of code okay and so that's going to be our model in this particular lecture so let's talk about some terms that you've heard thrown around as you've come up you know learning about computers so some definitions so multi-processing is sometimes used when there's multiple cpus or cores acting together on the same task okay
multi-programming is something similar which is multiple jobs or processes not necessarily running simultaneously so the idea of processing versus programming sometimes gets at that parallelism versus concurrency multi-threading is just multiple threads in a process okay and so what does it mean to run two threads concurrently now i know in the 61s they try to get at this idea of concurrency versus parallelism but let's take another stab at it i think what it means for things to run concurrently is the scheduler is basically free to run the threads in any order and any interleaving and a thread may run to completion or be time sliced in big chunks or small chunks or whatever and so concurrently means overlapping with no control over how that overlapping goes so here's some examples here's multi-processing where a b and c are threads and because let's say there are three cores in this system all three of them are actually running at the same time so not only are a b and c concurrent but they're also parallel okay here's a different view where we have the same three threads but we don't have more than one core or processor we only have one processor and in this instance we can't actually have things running simultaneously so one thing that could happen is a could run a while and then b and then c okay where now we're actually running a to the end and b to the end and c to the end or we could interleave them a runs a while b runs a while c runs a while then a then b then c then b etc all right and notice that these two options here could happen interchangeably on the same system depending on what the scheduler does or whether you have multiple processes using up cores so that maybe if you have enough things running you get this interleaving or if you only have one thing running you get multi-processing okay and so the very important thing to note here is the moment we move into this idea of concurrency we have to design for
correctness we can no longer just throw up our hands and write a bunch of code and hope it works because any code we write has to work regardless of what the scheduler decides to do for this interleaving let me say that again the moment we start having more than one thread in a concurrent system we have to start thinking about correctness and you could ignore correctness and just write a bunch of stuff and keep changing it until it sort of looks like it works and i guarantee that is a bad idea because it will stop working at three in the morning or you can design for correctness with the proper locking schemes or parallelism constructs or whatever and we'll talk a lot about that as we go and then you can be sure that no matter what the scheduler throws at you this will do the right thing okay questions we're gonna try to teach you how to design for correctness that's gonna be our goal okay and again the difference between multi-threading and multi-programming is perhaps somewhat historical but multi-programming came up in the days of the original unix systems where there was only one thread per process so a process had a single thread and an address space associated with it multi-threading comes up in the era where you can have more than one thread per process so it's really kind of multi-programming might be one thread per process multi-threading might be more okay now we're going to talk about advantages in a bit okay so just hold on to that question so concurrency is not parallelism so look here this is parallelism a b and c running together at the same time this is not parallelism all of these are concurrency they're the possibility for overlap okay so concurrency is about mtao multiple things at once parallelism is about doing multiple things simultaneously okay where simultaneously again if i were to take a slice across here and look at a given cycle on that multi-core processor for instance i would see
there's an instruction from a an instruction from b and an instruction from c all running at the same time whereas if i have only one core i see that there's really only green pink or blue okay so example two threads on a single core system are executing concurrently but not in parallel okay each thread handles or manages a separate thing or task okay but those tasks are not necessarily executing simultaneously okay now i'm not actually talking about amdahl's law which got brought up in the chat because amdahl's law is about the ability when you have parallelism to actually use it successfully so if you notice here green pink blue might not uh you know the green might run a little bit and then you have to wait for uh pink and blue to finish before you can do anything this might by amdahl's law be very poor because the serial section is large okay so um we'll talk about parallelism a bit more as we go on okay now here's a silly example for threads okay remember my favorite number pi okay and so here's a thread where we say main and uh we compute pi to the last digit and then we print the class list okay so uh what's the behavior here anybody are we yeah so first of all this is going to run forever um until we unplug it or hit control c or something what about the class list yeah class list will never get executed so this particular instance is an example where uh running the first one to completion and then the second one means the second one never runs okay and furthermore if you think about this we have not told the system that it can interleave these because we haven't introduced any threads so this is a process with one thread and all it can do is first run compute pi and then run print class list so using threads correctly starts with giving the system uh notification of what can actually run concurrently and then the scheduler can start doing different things for you okay so for instance
here we could add some threads now create thread here is just a um general abstraction for however you create threads in your system but if this somehow creates a thread which is computing pi on argument pi.txt and this is somehow creating a thread that's printing the class list on classlist.txt what we've done here is we've actually introduced concurrency to the system in a way that allows it to now start scheduling things in an interesting way all right create thread here is some uh abstraction of spawning a new thread i'll actually give you pthreads later in this lecture um as one instance but this should now start behaving as if there's two cpus in the system virtual cpus and as a result we will see um digits of pi perhaps showing up in pi.txt interleaved with uh the class list getting printed okay and so why is that well because we've created two threads and now the scheduler can interleave them and go forward now notice that this previous version even if you had a multi-core with a hundred cores on it it's still gonna behave the same way because we haven't told the scheduler that there are multiple threads that can run okay there's only one thread that's in this code okay now let's uh talk some administrivia here so as uh you know homework zero is due uh tomorrow and uh you really gotta get going on it homework zero is particularly important because it uh gets you set up with all of the uh infrastructure for cs 162.
gets you set up with um your github account and so on okay uh it gets you set up uh with your virtual machine gets you familiar with cs 162 tools and it reminds you a bit of programming in c which uh also i'm hoping that most of you went to the c review session yesterday i think that there were some videos that came out of that so you should be able to look at them but remember homework zero is due thursday tomorrow okay project zero was released yesterday and um you should uh be working on it okay it's due next wednesday and project zero is like a homework it should be done on your own okay and um by the way i'm very happy to hear that uh the review session went well that was our intention um you know c is uh a language that you probably don't have enough familiarity with uh yet you will have plenty by the end of the term and it's good to get moving on okay so um the uh other thing of course we mentioned is slip days you have uh because of the complexity of being virtual you have four slip days for homework and uh four slip days for project that's a little more than we normally give but um i would say bank those don't spend them right away okay um basically you know i'd save them for more the end of the term because when you run out of slip days and you turn things in late you don't get any credit okay and um i don't have a direct estimate on project versus homework but again project zero is like a homework so get moving on it um the other thing which i hope uh everybody realizes is that friday that's two days from now is drop day this is an early drop class so you need to make a decision about whether uh you're gonna keep the class or not okay and um uh you know it's very hard to drop afterwards i don't know um we had a student a few years ago who will remain nameless who didn't realize they were still in the class they had kind of stopped paying attention and about halfway through the
term they realized they were still in the class and they went to drop it found out that it was an early drop class and they were uh petitioning their department they weren't in eecs to allow them to drop it and last i'd heard that didn't go so well so um that was because i think the one uh late drop that you get uh they'd already used up and so they basically were stuck so don't be stuck this is actually bad for you this is an awesome class i like to think this is you know the most awesome class but perhaps i'm overstating it but if you don't want to be in the class drop it please um and let other people in so um i don't want to overstate that anymore but uh all right um and by the way as of tonight we're probably going to let the rest of the folks on the wait list into the class as well as concurrent enrollment so i think we are now uh everybody's in okay unless you don't want to be in which case you better drop okay all right any questions on administrivia i just wanted to uh make sure i told that story about early drop okay uh dsp related uh policy you can talk to me individually uh about it okay now i have everybody's letters and so on so okay as far as collaboration policy i've said this before but i just want to state it again i'll stop uh saying this every lecture but be careful about your collaboration okay watch it carefully so explaining a concept to somebody in another group is fine discussing algorithms or testing strategies is fine discussing debugging approaches is fine searching online for generic algorithms like hash tables or whatever that's also fine notice that these are not details about projects or homework these are higher level ideas or concepts okay that's fine what isn't fine are things like sharing code or test cases with another group or individual including homeworks so i know there was a proposal on piazza to have homework study groups or whatever but in cs 162 the homeworks are actually
graded and uh they are part of our checking policy to make sure that nobody's sharing code so um make sure to do your homeworks on your own and by the way doing your homeworks on your own matters because we've chosen the homeworks carefully to help you with the projects so that's another reason why it's very important to make sure that you do the homeworks because they will help you along with ideas in the projects okay um you can discuss high-level concepts but no details okay nothing like well i would do this or i'd have a variable that did that you can't do any of that idea okay um copying or reading another group's code or test case is not okay copying or reading online code or test cases from prior years not okay helping somebody in another group to debug their code not okay okay yeah now um you know we compare projects and homework submissions against prior years submissions and online solutions and take actions if we see uh significant overlap and don't ask your friends don't put them in a bad position by asking them to give you an answer to a homework that's happened and it got caught and it's bad for both parties so all right enough on that so let's go back to the topics all right um if you see a negative number on the waitlist i guess i have no idea what it means i think it means that we are basically allowing everybody in now at this point but um i think we may or may not let new people in so um so let's go back to threads which is our big topic and we'll get to uh processes as well but um back to jeff dean's numbers everybody should know uh i brought this slide up the first day just to show you the huge range of numbers okay you know in uh everything from half a nanosecond or uh hundreds of picoseconds up into you know seconds okay and really these up here in the seconds uh or in the millisecond range here can be problems okay because disk seeks you know tens of milliseconds uh etc you can't wait all of that
time tens of milliseconds before you do something else and so you want ways of overlapping uh i o and compute and so this number set here tells you right off the bat a very good motivation for threads okay which is handling i o in a separate thread to avoid blocking other progress threads masking i o latency now a question does disk seek also include ssd we'll talk a lot about uh disks and ssds a little later in the term as you may be well aware ssds typically don't have a seek time like a disk does okay because ssds are solid state that's what ss stands for uh but there is an access time to the ssd so even that access time uh is time that you could be off doing something else okay so it's not going to be as big as 10 milliseconds but it'll be microseconds that you might want to do something else in so um threads are typically in at least one of three states and when we get into schedulers and the internals of the operating system you'll see more about these but roughly speaking a thread could be running which means it's actually got a processor or core and it's getting cpu cycles out of that hardware or it could be ready which means it's eligible to run but not currently running or blocked and if you remember that picture i showed you earlier let me pop it back up here just because it's an easy way to say this maybe it's not that easy in this instance here if we can run a b and c what that means is that while a is running b and c are ready but not running okay they're on the ready list you'll see more about this as we go and as soon as the scheduler decides that a is done in this instance then it picks b off the ready queue or in this instance where we're alternating between a b and c we have a running while b and c are ready and uh et cetera okay and we're going to show you a lot more about how that actually works but for now uh what's useful is really this idea of running ready or blocked which is a new one which is that that thread went off
to do an operation and it made a system call to the kernel to say read uh from disk or from a network and it's actually not on the ready queue and it's ineligible to run okay and this is where the true power of threads comes into play because if we have two threads then one of them can be blocked and off of the ready queue while the other one's running now a question about uh can a core run only one thread at a time yes by definition a core has a hardware thread it's running um and that thread uh you know gets pulled off the ready queue now i will talk soon about simultaneous multi-threading where perhaps this gets a little fuzzy okay but um for now uh the other question how do you get from blocked to ready is basically the operating system notices that a thread is blocked on i o the i o comes in and then at that point it puts the thread on the ready queue and takes it out of blocked okay because it's ready to run because the thing it was waiting for is ready okay we'll get to that in much more detail uh not today so once the i o finishes the os marks it as ready okay and so then you know as a result we're gonna have multiple virtual cpus going through where any given core has one thing that's actually running and then the scheduler's got the rest of the things ready so here's an example where if no threads perform i o then essentially they're always on the ready queue or running and so here we have two threads uh while magenta is running cyan's on the ready queue and while cyan's running magenta is on the ready queue and while magenta's running oh you guys get the point if we put i o in here we get something more interesting right so here's an instance where um the magenta runs and at some point it does an i o operation and that's going to take it completely off the ready queue and put it um on a wait queue okay so it's not going to be running it's not going to
be on the ready queue it's going to be on a wait queue associated with that i o and now the blue item which we're assuming is just computing pi or something gets to just keep running and there's no reason to switch because there's no other thread in this instance that can run on the ready queue okay and then eventually when the i o completes the magenta is put back on the ready queue and at that point it's now available to be run now a question here about can it go directly from blocked to running so that doesn't happen that way um just because the scheduler gets involved and again we'll talk more about that and it needs a chance to run its policy to decide um just because something's on the ready queue it may or may not be the next thing to run and so we typically go from blocked to the ready queue not running immediately okay so um perhaps a better example for threads than computing pi although given that pi is a cool number i couldn't imagine a better example might be the following where we create a thread to read a really large file maybe pi and we create a thread to render the user interface and what's the behavior here is that we still respond to user input even while we're doing something large okay so this first thing maybe runs part of a windowing server or what have you or it's running the event loop for the windowing server this other thing is doing something that might be either i o or computationally intensive this is a great use for threads okay threads for a user interface and threads for compute is a common pattern okay everybody with me on that now um you know hopefully having done homework zero you know how to compile a c program and run the executable so this typically creates a process that's executing the program and initially this new process has only one thread in its own address space with code globals etc a question we might ask is how do we make this a multi-threaded process well i've kind
of shown you this in pseudocode but once the process starts it can issue system calls to create new threads and these new threads become part of the process and they share its address space and once you have threads in the same address space now they can read and write each other's data uh and therefore they can collaborate with each other okay whereas if you have threads in separate processes they can't talk to each other easily and again i know that question came up in the chat a couple of times but the whole point of processes is to make it more difficult for them to share information that's the protection component and the only way you share in that instance is by explicitly deciding to communicate okay so let's talk about system calls this is uh part of we mentioned this earlier with our dual mode discussion so typically if you look at an operating system we've got this uh narrow waist idea um or the hourglass kind of design where the difference between user and system uh is at the system call interface okay so things running above here typically run in user mode and that's going to be all the stuff that you uh first start writing when you're in user mode and things in the kernel tend to run in the system mode or kernel mode okay so user mode above kernel mode below and then of course we have hardware here and what's at the interface here is system calls and so the way to get from user mode into the kernel through this interface is by making a system call okay now many of you are probably saying but i've never seen a system call um well you know the operating system libraries issue system calls language runtimes issue system calls and so in many cases the system calls are actually hidden below your programming interface okay now a very interesting question that's in the chat here is well are system calls standardized across all sorts of operating systems and the answer is no um in general uh windows and unix and uh you
know os 2 and ios and all these different operating systems have different uh system call interfaces but there has been at least one set of attempts to standardize there's a so-called posix interface and the posix system call interface is shared at least partially across a bunch of different operating systems okay another question that's in the chat here is if you're an administrator are you running in user mode yes most of the time however you're allowed to do things that take you into the kernel uh where you might not otherwise be if you're an administrator and we're going to tell you how that works but you'll have to hold off on that for a moment so for now let's assume that user mode um is uh where programs are running and kernel mode is the operating system code and the way you get across this is an extremely carefully controlled transition okay now here's another way to look at this here we have a bunch of processes running on an os you know an application or a login window or a window manager and all of them typically have um an os library that's been linked into them and those applications use the operating system by making library calls and um if you haven't gotten very familiar with this already you will soon which is libc that's the standard library for c programmers it typically has a bunch of system calls in it that have been wrapped in a way and i'll show you what i mean in a moment that makes it possible to essentially make a procedure call that then makes a system call into the operating system and when you do that um the uh the system call is the thing that makes the transition to kernel mode but the function makes it easy to use and this is why many of you who haven't taken an operating system class have never actually looked at system calls directly okay so um a question in the chat is is libc standardized mostly okay mostly uh if you were to look at distinctions between linux and uh berkeley
standard distribution unix and other versions like ios the libcs aren't always exactly the same they mostly have all the same things in them but their arguments might be a little different and so on all right um but pretty much libc almost always has the same things in it all right okay let's think about similarity rather than difference for the rest of the lecture here because it'll be very easy to get lost in slight distinctions now the um the library i want to talk about for threads is called pthreads and the p stands for posix okay so posix is that attempt to um standardize a set of system calls across a bunch of operating systems and there is a semi-standard threading interface and you can look it up that is called pthreads and perhaps the most interesting thing here to start with is pthread_create which is a function call in c that you can make that's going to create a thread for you okay and typically you have uh several arguments which are pointers to structures and i'll show you an example of how this is used in a moment that for instance come back with a thread handle that you can control that thread by stopping it and starting and so on some attributes of the thread which we won't use much here and also a function to call and some arguments and so really what does pthread_create do ignore all the noise in the argument list here it starts a thread running on a procedure of your choice and that procedure by the way i thought i would talk you through this because everybody ought to hear it once what the heck does void star parenthesis star start routine parenthesis parenthesis void star parenthesis that is void *(*start_routine)(void *) mean the way you understand things like this is you go from inside to out so what it says is start routine is going to the left a pointer that's the star to a function that has an argument a void star item which returns a void star so start routine's a pointer to a function that takes a void star and returns a void star okay isn't that fun now there's
also pthread_exit which is something the thread calls to exit if it wishes although if the thread routine just ends then the thread is done and then pthread_join uh is something that says given a thread uh handle wait until that thread is done and then go forward and so join is a way to allow say a parent thread that has created a bunch of threads to now wait for all the threads to complete before it goes forward okay now uh what you should do is try when you're running in a unix style container including the ones that you've set up man pthread so man is the manual command okay this is the unix way to access manual pages and you say man pthread whatever and it'll tell you about pthreads or man ls or man whatever what's fun about this or whatever depends on your notion of fun is you can actually go to a google search and say man pthread and it'll work or there are also lots of websites out there that you can look at to see information about pthreads but let's use this to get us some ideas about system calls and even an example of using pthreads since we're trying to talk about what does a user see so what happens when pthread_create is called so what i see here is i see a routine pthread_create that i could call in my c code from main or something like that so what happens well remember that we're calling system calls and we're hiding it in many cases from users since we don't want regular users to have to worry about system calls and so really pthread_create is a function that if you were to look inside of the library you've linked it with what you'd see is that um it's really a special type of function not written entirely in c that does some work like a normal function and then has some special assembly in it that sets up the registers in a way the kernel is going to recognize and then it executes a special trap instruction which is uh really a
way of jumping into the kernel think of it almost as an error and then the kernel says oh it's not really an error it's a system call and by jumping into the kernel this way what we've done is we've transitioned out of user mode into kernel mode because it's an exception and then that place we jump to very carefully figures out what system call you want okay and so what happens is we jump into the kernel and the kernel knows that this is the create system call for a thread and it gets the arguments it does the creation of the thread and then it returns and at that point there's a special place to store the return value you're going to all become familiar with this and then it returns which takes us back to user mode and the bottom of this function which grabs the return values and then returns like a normal function so this function isn't a normal function this is a wrapper around a system call but as far as the user is concerned it looks like a function that you've just linked in okay okay and a system call can take a thousand cycles okay it depends a lot on um what it's doing um you also have to save and restore a bunch of registers when you go into the kernel and come out again and we'll talk more about the cost of that okay so doing system calls is not cheap this transition from user mode to kernel mode is more than just setting that bit there's a whole bunch of stuff around it and we'll talk about that stuff in another lecture okay now um okay and when you create threads uh what you're doing is you're basically creating at least initially here a schedulable entity and uh in that instance multiple things can be running okay and whether we transition to a new thread during creation is a different story which we'll get into when we get to actual scheduling but another idea that i'm just going to introduce for this lecture briefly is this idea of a fork join pattern which is a parent thread creates a bunch of other threads that run for
a while these little squiggly things are threads and then they all exit but what i want to do is i want to wait until they're all done with their job so maybe they're running in parallel etc and then eventually what happens is we join namely we wait for every one of them to end and then the single parent thread continues after all of these are done now there is a good question here which i want to address briefly which is once we enter this assembly code are we context switching no no no uh c code when it compiles turns into assembly it's just that we're doing some special assembly that's a little bit out of the scope of what a c compiler usually produces and that's why it's typically specified as assembly language okay the other thing is again don't get too worried about multi-core because what we're talking about works perfectly well if there's only one core in the system okay keep that in mind all right it will all run okay so now that we've got fork join parallelism let's tie everything together so here's some code i bet you guys thought you were going to get out of this lecture without some complicated code what we got here is we got a main function call okay and in this main function call which is the start of the program we have some uh malloc statements we have some thread creates we have some joins okay and we could ask ourselves how many threads are there in this program we could ask does the main thread join with the threads in the same order that they were created we could ask do the threads exit in the same order they were created and if we run the program again will the results change so let's look here for a moment what we see here is we start by the way this main program has been set up to take an argument and if there's an argument then we use it for the number of threads otherwise we use two so assuming there's an argument of some sort we malloc data that is big enough to hold the handles for a bunch of threads so these are pthread_t items and
um then we uh print some information like where the stack is okay and uh some other information like where is this common um item okay and then we go through a loop and we create a bunch of threads we create n of them and for each thread uh we keep track of its handle in a thread structure okay so now let's say there are four threads we've gone through we create all four threads and we store handles to them and the reason we do that is so that we can join at the end but let's take a look at this pthread_create what you see here is the thread function which unsurprisingly as i mentioned before is a function that takes a void star and returns a void star and by putting the thread function here we've implicitly said put a pointer to that function there so each loop it creates a thread that calls thread function and then finally we are going to go through the pthread_join to finish and if we were to run this with an argument of four what's going to happen is the first thing is it's going to tell us uh where the stack is so main stack and notice that what i did was this t variable that's in the local variables of main i say take its address and cast it basically turn it into a long and print it and so here's an address 7ffe2c is an address that represents the stack for main okay and what's interesting is we do that uh for each of the thread functions when they run where we have this tid and we print out the uh the storage location for this local variable tid and notice how they're all a little different so each thread has its own stack okay and notice also that they run in different orders and that's because we create a bunch of them and then they get interleaved okay all right and so the question is sort of how many do we create well it depends on uh the argument do they join in the same order they were created well yes because we um we go through join and we do a join on the threads
in order zero one two three and therefore the main thread waits for thread zero to finish thread one thread two thread three and if the thread exits early then when we go to join it just finishes really quickly okay and then if we run the program again will the results change yes the scheduling is going to be different so the threads may not wake up in the same order okay so there are five threads here total yes the four that we created uh with pthreads and the original main one okay so there's always a thread created when you create a program okay now uh if you notice um now of course pthread_exit uh basically when a thread exits it allows the join to move forward now um this join is not with null this is we're joining with this thread and null here is an argument that we're just not using on the uh pthread_join okay and there's four created by the for loop because in this instance the argument was four and we took that argument to decide how many to create so n thread equals 2 is only used if we don't have an argument all right so what about thread state so there's state shared by all threads in the process address space okay if you don't call pthread_exit which uh we could easily forget then what happens is um the thread function exiting calls pthread_exit uh basically um implicitly without you having to do it okay all right so the state that's shared by all threads is in the process address space so the content of memory uh is shared i o state is shared uh state that's private to each thread in some sense is there's a thread control block in the kernel that's why i have it in red and then there's cpu registers that are either in the processor or in the thread control block depending on whether it's running or not and a stack okay and what is the stack well the stack has uh parameters temporary variables return pcs etc so um one view of what we just did there was there's a bunch of shared state for the threads which is a heap global variables and code okay and
then the per thread state is there's a thread control block um and a stack and saved registers for each one of the threads now just to quickly be on the same page with 61c material if you remember what stacks are good for they hold temporary results and they permit recursive execution so if you notice here i have some uh pseudo code for c and notice these labels over here represent the memory uh that this if statement's at or the memory that this b is at okay so if the if statement's at a then b might be at a plus one uh these this is just a loose idea here so don't get too hung up on this okay but if we call a of one what's going to happen is a is going to come in and uh we're going to create a stack frame okay for procedure a to get called temp is one okay because that's uh a local uh argument and the return is going to take us to exit why is that well when we return from this version of a the next thing is exit and we're done okay and so those are all on the stack and now we sort of say well is temp less than two well yes it is because it's one in that case we're going to run b and what does b do well b creates a stack frame for itself but if you notice here um there aren't any local variables so the only thing we have is the fact that when b returns we're going to go back to a plus 2. 
why is that well b calls we call b here and then when we return we return to here okay so that return variable is actually put on the uh on the stack okay and now when c runs it creates a stack frame and eventually we call a of 2 and notice that now we've got we're calling a again recursively so a the first version of a is here on the stack but by the time we go to the second version we're down here and is temp uh less than two no so at that point we're gonna output uh we're gonna print temp which is two and then we're gonna return and what do we return well we return to c plus one which is down here and c plus one is going to return c okay and then eventually we get back to a plus two we're gonna print our one we're gonna return and we're gonna be done okay so there you go that's a stack now the question of can uh is it possible for one thread stack to crash into another absolutely okay and if you look you could say well what's the layout with two threads well we have different stacks in the same address space and if this stack grows too far it's going to mess up the blue stack okay so you know we start having to ask some interesting questions how do we position stacks relative to one another how big are they and so on uh one of the things we'll be able to talk about uh you know in a few lectures is we can put what are called guard pages such that if this pink guy runs too too long and it goes into this empty space it'll actually cause a trap into the kernel which can then make a decision about whether to allocate more memory or to kill off the thread okay and the reason there are no protections in place is because multiple threads running in a process the process is the protection so this is good and bad right it's a liability if you run uh infinite fibonacci style things that run into each other because we all know everybody wants to do that all the time as you learned in 61a or it's a benefit because now yes the stacks are in the same address space but these
two threads can easily share data okay all right and let me i'll get to the sharing in just a second here okay and uh how to allocate more memory uh oftentimes with a thread if you really are running out of space you may need to um there there's an argument you can use to say i need more stack space okay but um this is uh this becomes an interesting question of debugging we'll save that for another lecture but what i do want to say here is the programmer's abstraction is uh one of lots of threads all running kind of at the same time right an infinite number of processors whereas the reality is some of them run and some of them don't okay and and it alternates and that's that idea that we have to we have to create our uh code so that runs correctly despite uh the scheduler's interleaving in fact i like to think of the scheduler almost as it's um it's a murphy's law scheduler is the way to think it's gonna do the the interleaving that screws up your code the most and so you need to design for all interleavings which really means you have to do the correct thing with respect to locks okay and so the programmer's view here might be that we have x equal x plus one y equals y plus x etc but in reality one execution could be well they do run one after another and another could be well x equal x plus one runs but then we go off and we run a different one for a while and then we continue or we run the first two guys go off for a while and continue okay so this reordering uh let's not worry about reordering so much as interleaving okay now um so there are many possible executions okay and i think i've i've hammered that point home already but you need to keep that in mind and before you give up and think this is impossible in fact proper locking discipline will take care of you here and and uh make sure that you run correctly under under all interleavings okay and that's um our job over uh the next you know couple of weeks is to give you an idea how you might possibly design
things so that they work under a variety of interleavings okay so correctness with concurrent threads has this non-determinism component where each time you run there's a different interleaving okay so the scheduler can run the threads in any order it can switch threads at any time and it makes testing difficult in fact it makes testing uh of all possible interleavings not in principle even possible now there are folks in the department who know how to test up to a certain depth of interleaving and there's some pretty elegant uh results in that mode but um there's one instance where things can be done and that's when the threads are independent and they don't share any state and they're say in separate processes then it really doesn't matter what order they run because uh you'll always get the same answer and that's a deterministic result cooperating threads which are running in the same process suddenly we've got this non-determinism and we have to worry about it so if you could somehow make everything always independent then you've got deterministic behavior and you're in good shape of course even when you think things are independent they're all running on top of the same operating system and we all know that an operating system crash or bug can screw up pretty much anything but let's not worry about that for now so the goal is correct by design so just to point this out we have some race conditions so what if initially x is zero and y is zero and we have two threads one of which sets x equal to one and the other sets y equal to two what are the possible values of x when we're done well that's not even very interesting right it must be one because b doesn't interfere okay more interesting of course is this one where maybe thread a does x equal y plus one and then thread b says y equals two or y equals y times two what are the possible outputs there well it could be one three or five non-deterministically okay and so um more interesting okay now um that's because 
we're essentially racing a against b and uh this is bad code okay yes this has non-deterministic answers but you wrote code that should never have been written this way okay and we're going to try to avoid race conditions now let me show you a good reason for sharing there were some questions uh earlier so threads can't share stacks and the reason for that fundamentally is that the stack represents the current state of an execution and if you had two threads on the same stack they'd just screw each other up and you'd lose you'd lose that go back through my thread or my stack example and think through that for a moment so threads all have to each thread has to have its own stack now um here we have an instance of for instance a red black tree which you probably ran into in 61b maybe thread a does an insert and thread b does an insert and then if you just wrote code like this that tree would get screwed up okay um so and yes every thread has its own stack in uh in the process okay so um this particular instance of thread a and thread b is absolutely not going to work you're guaranteed to get a wrong result so some uh quick definitions which we are again going to go through in much more detail in subsequent lectures are the following so synchronization is coordinating among threads regarding some shared data in a way to try to prevent race conditions and prevent you from getting the wrong answer so some num some ideas mutual exclusion basically ensures that only one thread does a particular thing at a particular time so one thread excludes the others from a chunk of code it's a type of synchronization a critical section for this uh for this lecture is code that exactly one thread can execute at a time okay it's the result of mutual exclusion and a lock is an object that only one thread can hold at a time and it's used to provide mutual exclusion now these things we're going to talk in much more detail and we're actually going to tell you how to build locks
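The definitions above (mutual exclusion, critical section, lock) can be made concrete with the shared-counter example the lecture keeps returning to, written here with the pthread mutex API. This is a sketch with hypothetical names (`incr_fn`, `count_with_threads`): several threads increment one global counter, and holding the mutex around the increment makes that line a critical section, so the race on the shared variable goes away.

```c
#include <pthread.h>

// Shared state: one counter protected by one mutex.
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

// Each thread increments the shared counter `times` times; the
// lock/increment/unlock triple is the critical section.
static void *incr_fn(void *arg) {
    long times = (long)arg;
    for (long i = 0; i < times; i++) {
        pthread_mutex_lock(&m);    // acquire: only one thread past here
        counter++;                 // critical section
        pthread_mutex_unlock(&m);  // release: a sleeping waiter may run
    }
    return NULL;
}

// Spawn nthreads threads and join them; with the lock held around the
// increment the result is always nthreads * times, never less.
long count_with_threads(int nthreads, long times) {
    pthread_t tid[nthreads];
    counter = 0;
    for (int i = 0; i < nthreads; i++)
        pthread_create(&tid[i], NULL, incr_fn, (void *)times);
    for (int i = 0; i < nthreads; i++)
        pthread_join(tid[i], NULL);
    return counter;
}
```

Without the two mutex calls, `counter++` compiles to a read-modify-write that two threads can interleave, and the final count comes up short non-deterministically; that is exactly the race the definitions are there to prevent.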
that's going to be an interesting discussion in a couple of lectures but for now a lock is going to be a way to give us mutual exclusion and locks have a very simple interface they you can acquire the lock and you can release the lock and when a thread acquires the lock or tries to acquire the lock what happens is if some other thread currently has the lock other threads that are trying to acquire it are put to sleep and when that thread that has a lock finally releases it then one and only one of those threads is allowed to acquire it so this mutual exclusion given by locks okay namely only one thread can acquire at a time is going to allow us to start building correct code even with a lot of parallelism and concurrency in there okay and don't worry about how to implement this we will talk about that in great detail later but how would we use that in this example well uh the two threads would acquire a lock on the whole data structure or on the root of it okay insert three and then release it or maybe thread b acquires the lock inserts four and releases it um there's a an elegance to how to distribute your locks that you're gonna get to start thinking about like you could have a single lock at the root and if you grab a lock then you know that if a grabs the lock then it knows that thread b can't be anywhere in this data structure so it can just do its own thing and insert and then when it releases then b can know that a is not in the data structure and so on or you can start distributing locks throughout and you can do a more sophisticated thing where you grab a lock and then you grab another lock and so on okay but for this purpose of this lecture think of grabbing a single lock at the root that's going to clean things up for us okay all right now there's an interesting question here about uh single instruction operations on various shared variables and those are uh special types of hardware interlocks we're going to talk about where you don't actually need a 
lock okay and yes there's plenty of different types of lock although we'll also talk about that as we go forward now p threads again p for posix has a locking infrastructure that thing we just talked about it's called a mutex okay and you can initialize a brand new mutex and then the different threads in the system can use lock and unlock and uh it'll work like i just said okay so you you'll have a single thread that'll come back okay that's that mutex structure and then you'll use that mutex in different threads and as long as they all use the same mutex then they'll all have that locking behavior i just said and and p thread lock will grab the lock and unlock will release the lock okay and a mutex is just another name for lock in this instance okay so you'll get a chance to use these in homework one so here's an example of our thread function for our multiple threads so mutex is a type of lock yes and here um our critical section uh could be where we have this common integer that's a global variable but we have a bunch of threads that are on it if you try to increment a global variable uh the simple version of increment here is going to get all screwed up if you have multiple threads on it by grabbing the lock incrementing and releasing the lock then you can make sure that that shared variable uh does not get screwed up okay all right now are there any questions on that before i um i want to say a little bit about processes now before we are out of time so what it means when a thread holds a lock is that the thread has executed the lock acquire operation whatever that is here it's p thread mutex lock and it succeeded then the the thread that succeeded and was allowed to continue has the lock okay so in this instance because this is now a critical section oh there's only one thread that's ever allowed to get past the lock at a time and so only one thread can be in this critical section at a time and we say that that thread has the lock okay and if a thread tries
to acquire the lock and the lock is already acquired what happens is it's put to sleep until it's released and then it allow is allowed out so only one thread's allowed in this critical section at a time okay all right and and keep in mind this thread function is run by many threads simultaneously so we're talking about a scenario where many threads are running at the same time okay so let's talk about processes briefly before we uh run out of time here so how do we manage process state so we've been talking about for instance multi-threaded um multi-threaded processes where each of the threads has a stack and some register storage and then of course there's sort of global code data and files okay and um just to let me just say this again answering a question what constitutes a critical section is the piece of code that's being project protected by the lock okay that's the critical section it's the piece of code where only one thread's allowed to execute that little piece of code at a time okay and it could be many it could be many instructions it could be many uh things in there okay now okay so now what we're gonna i'm gonna move on to processes so if you remember the life of a process is the kernel uh execs the process we kind of talked about this last time and then when it's done it exits and we go forward so rather than threads we're actually talking here about creating a brand new address space and moving into user mode okay and once we uh are in user mode then there's a lot of ways that we get into the kernel like we talked about system calls interrupts are another thing that we will talk about where an interrupt might involve say accessing some hardware here and then eventually we return from interrupt or an exception like a divide by zero or um a page fault other things might bring us into the kernel et cetera okay but that's still we're this lecture is about user mode so what how do we create new processes okay so processes are always created by other 
processes okay so how does the first process start this is like asking about the big bang right well the first process is started by the kernel it's often configured as an argument to the kernel before the kernel boots and it's often called the init process and then that init process creates all the other ones in a tree okay and all processes in the system are created by other processes at that point now um we're only going to have time for a couple of these process management apis here but the first one here that's easy is exit so here we have main okay the process got created we execute exit it ends the process okay so this is not particularly um maybe interesting to you except for the fact that every process has an exit code which can then be grabbed by its parent where the parent is going to be the process that created it okay and by the way this is completely different from the .init segment in the elf file so notice that um this uh initial process the init process is actually a process okay that's running in the system and you can find it typically if you know where to look okay because it's typically if it exits then the then the system crashes and goes away so exit's not maybe that interesting except that it has an argument and zero means successful exit whereas anything else is non-zero says unsuccessful and the parent process can find that okay so what if we let main return without ever calling exit well in that instance you actually uh get a an implicit exit as well okay the os library calls exit for you successfully all right the entry point of the executable is in the os library so the os library when you do a compile and link uh basically says that main is the program that gets called almost think of this as the the first thread actually calls main and then it exits and it kills off the process when you execute exit okay and um exit code and return code will essentially do similar things okay now and if you notice uh if main returns the
library calls exit all right so let's look at uh something more interesting and unfortunately we're not gonna have a lot of time for this but hopefully you guys can stick around for five more minutes i want to talk about fork because fork is one of the most interesting strange things that we're going to talk about for process management because it's it's sort of a legacy uh operation in some sense but it's also kind of the backbone of a lot of the way that unix operating systems work and it's the one that you're looking at as well pintos is going to be similar to that and fork is used to create a brand new process and what it does is it copies the current process uh entirely so if you imagine that you have one process with all of its address space what fork does is it copies the whole thing to another process okay or to another address space and then it starts running in the other address space so now when you're done you have two identical copies of things running whereas before you only had one so fork is really taking and duplicating everything about a process okay and this is going to be a little weird so this is why i'm hoping you'll give me this extra five minutes with the return value from fork is uh basically one of three things if it's greater than zero then you know you're running in the original parent and the return value is the process id of the new child if you get back zero you know you're the new child and if you get back less than zero it's an error okay and pid here means process id okay so the state of the original process is duplicated in both the parent and the child okay pretty much everything address space file descriptors etc so here's a good example where uh we're running along and we call fork okay and at the point that we call fork as soon as we return from fork a very weird thing happens we now have two processes that are running two of them and those two processes are identical except for the thing that comes back from fork so in one of 
them we get a value greater than zero and the other one we get a value equal to zero and only when fork fails because uh say fork has run out of memory or something then only one of them comes back and we say fork failed now there was a question about fork a fork bomb that would be an instance where we are forking so many times uh that we have so many processes running that memory runs out and uh we're toast and often that's usually because of a bug in the operating system or something okay but if you notice um in this instance where uh things work the original process does not get killed it's happily running but it comes back with cpid greater than zero all right and the child comes back with it equal to zero and if you notice here so that means the parent is running there okay so let's take a look here so we uh we call fork and now suddenly we have two things that have returned from fork and two different processes and one of them the original parent that's what p stands for has cpid greater than zero which is the uh p id of the child and basically you can say well my i get my own pid i can say i'm the parent of that child otherwise you can say here's my pid okay okay now um memory allocated by other threads so typically the memory is going to be duplicated but you're only going to have one thread running initially in that other that other process okay now if you fork and fork again you would end up potentially with a tree that was a question except for the fact that if you could have uh the parent do the fork again but the child not and then you'd have three processes running okay so uh it may be a tree but it doesn't have to be a binary tree okay so again we're gonna make sure that we leave with this rather strange concept okay it's that once we execute fork in the original single process when we're done there's two of them two that are identical except when one of them runs fork returns a value greater than zero and when the other one runs fork returns zero 
and that is the way that those two processes know whether they are the parent or the child okay and you're thinking about this too hard if you try to think about somebody's created already at somebody else in fact what happens is the memory space is exactly duplicated and the original parent uh there are there is information in its process uh table as to whether it was the parent or not and so we will get the return back and the processes are put in a tree inside the kernel because the parent has linkages to all of its children okay and if the child calls fork then it becomes a parent for the things that it just created okay and so we get everything's duplicated including the stack and they're not the same address space because they're duplicated address spaces they have the same values not the same address space now um lest you go away from this lecture thinking that sounds ridiculously expensive how can that possibly be the right thing to do i will tell you that you play tricks with page tables so that you don't actually copy everything what you do is you copy the page table and you set them as read-only and you do some tricks okay that's going to be topic for another more fun discussion and yes linux has a version of a fork called spawn that doesn't actually do this copying but again we'll get to that later i want you guys to all understand fork for now okay and here is a race for you okay and uh the question is if you look what happens here if we fork and we say in the parent we uh have i equals zero time plus one and we go forward and in the child we go backwards what gets printed out does anybody want to uh make an argument about this what does it print does it get confused where i goes up a little and then down and then up a little and then down i see somebody says infinite loop yes great different i because the processes are completely different i is completely different and the parent goes up and the child goes down and they don't interfere with each other
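The fork behavior described above can be sketched as a small C function. This is a hedged example with a hypothetical name (`fork_demo`): after fork there are two identical processes, the return value tells each one who it is, and because the address spaces are duplicates rather than shared, the child's change to its copy of `x` is invisible to the parent. The parent also uses waitpid to grab the child's exit code, the mechanism mentioned earlier for exit.

```c
#include <sys/wait.h>
#include <unistd.h>

// After fork() there are two processes running the same code; the
// return value distinguishes them: >0 in the parent (the child's pid),
// ==0 in the child, <0 only if fork failed.
int fork_demo(void) {
    int x = 10;
    pid_t cpid = fork();
    if (cpid < 0)
        return -1;      // fork failed (e.g. out of resources)
    if (cpid == 0) {
        x++;            // child: modifies only the child's copy of x
        _exit(x);       // child exits with code 11
    }
    // parent: waits for the child and reads its exit code
    int status = 0;
    waitpid(cpid, &status, 0);
    // parent's x is still 10; the child reported 11 via its exit code
    return x * 100 + WEXITSTATUS(status);  // 10*100 + 11
}
```

The parent's copy of `x` staying at 10 while the child exits with 11 is the whole point: duplicated address spaces with the same values, not one shared address space.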
the only thing that happens is that interleaving might be different based on scheduling very good okay because the prints are printing the same standard out all right good um and then uh we will pick up with this next time because we're out of space but for exec by the way here look this is the code the way we create a brand new process is we fork a new process and then we call exec which immediately says throw out all of my address space and replace it with this new program and that's how a new program is created all right so in conclusion we've been talking about um yes it's true for global variables are copied as well so they're completely separate address spaces with no interaction because they're separate processes they are not threads so there's only racing for i o ordering on the same output screen but not anything to do with any of the computations all right so threads uh are the unit of concurrency okay uh they're abstraction of a virtual cpu a process is a protection domain or address space with one or more threads and we can see the role of the os library and the system calls are how we control uh access an entrance uh to the kernel okay and the finally uh the question was if the parent gets killed does the child die no what happens is in fact when the parent gets killed if the child is still running then a grandparent uh inherits the child and ultimately init inherits the child if it's still running so all right i'm gonna say goodbye to everybody sorry for going over a little bit but i wanted to make sure that we talked about fork uh may you all have a great uh holiday weekend remember no class on uh monday and also remember that uh friday is drop day so if you want to be in the class great if you don't please drop all right ciao all and have a great weekend bye |
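The fork-then-exec pattern mentioned at the end of the lecture can be sketched like this. It is an assumed minimal example, not the lecture's slide code: the child replaces its entire address space with a new program via execv, and the parent waits for it. `/bin/true` is used only because it is a standard program that exits immediately with status 0.

```c
#include <sys/wait.h>
#include <unistd.h>

// Create a new program the unix way: fork a child, then have the
// child exec, which throws out its address space and loads the new
// program; the parent keeps its own address space and just waits.
int run_program(void) {
    pid_t pid = fork();
    if (pid == 0) {
        char *argv[] = { "/bin/true", NULL };
        execv(argv[0], argv);  // on success this never returns
        _exit(127);            // conventional "exec failed" exit code
    }
    int status = 0;
    waitpid(pid, &status, 0);
    return WEXITSTATUS(status);  // 0 if /bin/true ran and exited cleanly
}
```

Note that exec does not create a process; fork creates it, and exec only replaces what runs inside it, which is why the two calls always appear together in this pattern.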
CS_162_Operating_Systems_and_Systems_Programming_Berkeley | CS162_Lecture_9_Synchronization_4_Monitors_and_ReadersWriters_Cont_Process_Structure.txt | uh welcome back to cs162 so um today we are going to finish up the discussion that we were having of the reader's writer's problem last time um and uh i have a little bit of a simulation through the code so we can kind of see how things proceed so if you remember last time we covered a lot actually we talked about among other things the whole idea of atomic instructions or read modify write instructions the primary one being test and set which everybody uh knows about typically and um but we also talked about swap compare and swap and load link store conditional the key thing to interpret with these was that what everything that's in between the braces here happens all at once in one cycle atomically in a way that cannot be interrupted by any other thread and so when one thread executes a test and set all three of these things have to happen and just to give you the um the way to interpret this what we said is you give it an address and then you simultaneously grab the value that was there and store a one grab the value that was there and store one and you do that atomically and uh that's enough to basically build all sorts of uh interesting synchronization primitives and we uh talked a lot about that uh in terms of how to make locks for instance swap is is similar but that's where you grab a value and store something else and so you grab a value store something else atomically compare and swap basically says you take what's in memory and if it matches one thing then you store something else there okay and swap and compare and swap are available on architecture such as the x86 and there's a version of compare and swap that actually returns the old value rather than uh success or failure and uh so these get uh the ability to do something very interesting okay complicated all right so but for instance tested set is 
powerful enough to uh build any sort of lock primitives you might want and so this was one that we looked at uh where for instance we had a guard value and the lock itself was in shared memory and so unlike trying to disable or enable interrupts we actually can build locks that work across a whole core with multiple or across a whole processor with multiple cores or across a multiprocessor with many chips and many cores in each chip and the reason for that is that things like guard and mylock are actually in physical memory and uh and shared memory okay and notice that guard is actually shared across all of the implementations of locks whereas mylock you can have many different versions of this blue lock and they're all different locks and we gave an interface of acquire and release where you give the address of mylock acquire basically use test and set in the spin loop here until uh it found that the guard was zero and remember because test and set is atomic we grabbed the value of the guard store one there and uh what we showed was if we built a lock this way where we uh had a um well kind of a this red things like a lock over the lock implementation then we could very quickly grab the the guard lock check and see whether the blue lock was uh busy or not if it isn't busy we set it to busy and return from acquire so that means that the thread actually acquired the lock if it is busy then we put the thread on a wait queue and go to sleep and then release is kind of the opposite of that where we sort of wait to grab the guard if anybody's on the on the wait queue we just go ahead and give it to them wake them up basically otherwise we free the lock and we release the guard all right and the reason this is not busy waiting is because what's happening here in red is very quick because these the critical section of the lock implementation is fast all right were there any questions on that very quickly so you need the address of the guard and uh so yeah i guess
technically this is test and set of &guard right that's a that's a good catch on there i guess uh anybody else okay good and um in the if case uh you're talking about on release the reason we're not freeing the lock in the if case on release is because we're giving it in some sense over to the thread that we just woke up and so the lock itself always stays locked in that instance all right hopefully that helps okay so then the next thing we talked so what's interesting about this is this is kind of a skeleton implementation because we really didn't tell you how to deal with this putting something to sleep okay what happens if the thread is suspended well uh the guard is equal to one well that's exactly what you see over here in acquire right because guard is one uh we put the thread to sleep and we have to set guard equal to zero somehow atomically so that was some sort of dive into the kernel set guard equal to zero i didn't really specify how that was but the idea is it's very similar to what we did with interrupt disable where we put something to sleep and the interrupt got re-enabled when the next one started okay so um you need to always analyze a situation to decide whether busy waiting is uh going to be an issue or not this we would call uh not busy waiting because it's extremely fast you're not waiting for a long time you're just waiting for the the previous thread to finish its implementation of acquire or release okay now what if the thread gets interrupted while the guard is one that's very good that's the one instance where things might take a little longer um we won't worry about that case for now all right but that's very good catch and that's uh that's good thinking there but for now let's just assume that we're talking about this being very fast okay now um i also told you about futex so the problem with this uh implementation here with test and set was that we don't really tell you how to deal with that sleep case because clearly with sleep you
got to go into the kernel the good thing about this when you don't sleep is uh you're just using test and set and uh sets and frees on uh memory and so you're not actually doing system calls but when you have to deal with sleeping and waking things up there is a system call and so i gave you this good example of an interface that linux put together called futex for fast user uh space mutex and the idea with futex was you'd go ahead and take a system call into the kernel and it would put you to sleep okay and this is enough of a primitive to build things like what we just talked about with the test and set so that when you decided that you couldn't get the lock you could do this system call to go to sleep the tricky part about that is you got to make sure when you implement something that you tell it both what your lock address is and the value that you expect if things are sleeping so that if there's a change from the time you decided you need to go into the kernel to sleep to when you called the kernel uh with futex if something changed in there futex doesn't put you to sleep uh it actually just wakes you up right right away so you can check it again okay so that's what's clever about this implementation all right and this is an interface you can think of uh to the kernel sleep functionality um and it's not exposed typically in libc to users this is what libraries inside of libc might use to make the uh the p thread mutexes and p thread semaphores and here was an example i'm not going to go over this again and the sleeping here the question is sleep until futex wake wakes somebody up one thread okay or n threads you can kind of say how many in the wake up okay where's that let's see do i have oh sleep tell futex uh wake yes you're right right there that's a typo thanks good catch so in this example i gave you rather than test and set we actually use compare and swap and swap and you should work your way through this but this is a a pretty clever
implementation where the lock has three values it's either fully unlocked it's locked and only one thread has a lock and nobody's sleeping in the kernel or contested is a situation where somebody might be sleeping in the kernel and if we do this in a clever enough way you can make sure that acquire and re release are extremely fast assuming there's only one person grabbing and releasing the lock and no contention and it's only when somebody else comes along uh when the lock is held that then we move into this contested state and potentially put somebody to sleep and um you should look on this on your own i don't want to go over it again i talked about it last time but the key thing here is that the compare and swap and the swap these first two are the uh where we grab the lock atomically in a way that will make sure that we don't have more than one person actually holding the lock at a time okay so now back to where we were when we finished up we were talking about monitors as a good alternative to semaphores and a monitor is basically a lock and zero or more condition variables the zero condition variable is not very interesting because it's just a lock but a monitor is a lock and condition variables from managing concurrent access to shared data and it's a programming paradigm it's a way of thinking and condition variables are very special entities because their cues of threads waiting for something inside the critical section okay and the key idea here is to allow sleeping in a critical section uh in a way that the person writing the code can forget about it okay and we'll we'll show you we're going to go through the reader's writer's example in some detail again so you can so you can see uh how a more complex example works but the idea here is that you always grab the lock before you touch the condition variables and if it turns out that uh conditions aren't right you can go to sleep holding the lock okay and this this is the only situation where you ever ought 
to go to sleep holding a lock. The condition variable is a version of what we've been talking about in our implementations before — for instance, going to sleep with interrupts still disabled. That's kind of the way the code works out: under the covers, of course, we end up waking up somebody else who then turns interrupts back on, so it doesn't actually freeze everything up. We also talked about the test-and-set example a little bit ago, trying somehow to put the thread to sleep while setting the guard back to zero; this is similar. So condition variables under the covers take care of doing the right thing with the lock, but from the standpoint of programming, you think of the condition variable as putting you to sleep while holding the lock. With semaphores you can't do that: if you use a semaphore and it goes to sleep while you hold the lock, you've just deadlocked your execution. So there are some operations on condition variables which are useful here. One is wait, where you have to give it the lock — why is that? Because the condition variable, when it puts you to sleep, has got to somehow make sure the lock can be released. Signal wakes up one waiter, and broadcast wakes up all of the waiters, and the rule is: always hold the lock when doing any condition variable operation. Always hold the lock. The problem we were talking about at the very end of the lecture was the readers/writers problem. Essentially, there's a database, and the database has access rules: either you can have many readers all looking at the database at once, or a single writer, but not both. The reason is that as soon as a writer touches the database, it's going to potentially disturb the consistency of that database until it's entirely done with its write, so we don't want readers to be anywhere near a writer,
and similarly we don't want two writers going on at the same time, because that could also ruin the consistency. So in this model we want some way to have a single writer or multiple readers, with an arbitrary number of threads that might be trying to do each, and we want to control the chaos. I like this example because it shows you how powerful monitors are compared to anything else; I challenge you to think about how to do what we're about to do here with just locks or semaphores — it's a mess. Now, why isn't using a single lock on the database sufficient? Anybody want to remind me why? Yep: we want multiple readers. If we grab a lock before we read, then nobody else can get in there to read, and we already said we want more than one reader, so we already need something different. We want many readers at the same time, but only one writer. So, to remind you, here's the structure of a program using monitors, and remember this is a Mesa-scheduled monitor program. If you go back to the lecture last week, we also talked about Hoare monitors; Mesa versus Hoare scheduling is about what happens when you signal and wake somebody up. Mesa is far more common and much better on resources for the kernel. Mesa comes from the Mesa operating system from Xerox PARC, and the Hoare monitor comes from Tony Hoare. But we're going to do Mesa, and in Mesa the typical pattern is the following: you grab the lock, and then you go into a loop, and you say, as long as the conditions aren't right, I'm going to go to sleep — which means take my condition variable and go to sleep — and whenever I wake up, I check again. That's where the Mesa part comes into play. Will you ever be using a Hoare-style monitor in 162? Only in exercises, occasionally; we will stick to Mesa.
All right — that's what you're going to run into with any monitors and condition variables you have out there. Notice the looping construct: that's because of the Mesa aspect. Whenever we go to sleep, it's because conditions were wrong, but when we wake up, we've got to go check our condition again. Between the lock and the unlock, the way I want you to think about this is that we hold the lock through this whole loop, even while we're sleeping. I want you to think that way, even though you're more clever than that and somewhere under the covers you know the lock gets released and reacquired. When you're thinking about whether your program is correct, I want you to think that between when I lock and when I unlock, I have the lock. The reason that's so powerful is that it means I can look at all sorts of conditions — I can look at multiple variables at once to see how they compare with each other — and because I have the lock, nobody can get in there and mess things up while I'm looking at them. When I go to sleep because the conditions aren't right, yes, I'm letting somebody else fix the conditions, but when I wake up, I know I once again have the lock, and I can check things again without worrying about somebody getting in there. Now, can things change before the wait and after the wait? Well, if I find the conditions aren't right and I go to sleep with a wait, things had better change, because if they don't, I'm never going to get out of this situation. When I go to sleep, the hope is that somebody else will come along and change the circumstances, so that when I wake up and check the while loop, I eventually get out of here. You could say it's not exactly equivalent to holding the lock — you could think of the lock release as being inside wait — but none of the code that you see on the
screen here is ever executed without holding the lock. I realize this is a strange sense of fooling yourself, but you've got to think of it that way: none of the code you see on the screen runs without the lock held, but inside of wait the lock gets released and re-established. We're going to go through this in more detail in the readers/writers example. And once I unlock, I can do all sorts of stuff — I don't hold the monitor anymore; I'm doing something because I've already checked the entry conditions. Then, when I'm ready to finish, I do the checkout, and here's a simple checkout: I grab the lock, I signal somebody to wake up, and I unlock. In addition to signaling, I might change some parameters that they'll check to decide whether it's okay. All right, is everybody willing to fool yourself a little bit that nothing between lock and unlock releases the lock? That's the way you need to think while you're programming — well, you don't have to trick yourself once you get used to it. This is really a way of thinking. With monitors, I'm teaching you a pattern, a paradigm for programming, and it's a way of focusing your attention exactly on the parts that matter. So let's look at the basic solution. We rushed a little bit through this at the end, but that was justified, because I wanted to make sure you had it to mull over over the weekend. We have correctness constraints, which basically say that readers can access the database as long as there aren't any writers, writers can access the database as long as there are no readers or other writers in the database, and only one thread can manipulate our state information about who's where. The basic structure looks like this: the reader says, wait until there are no writers; access the database; and on checkout, wake up a waiting writer if
there is one. A writer says: wait until there are no readers or writers; access the database; and on checkout, maybe wake up a waiting reader or writer if necessary. Now, this particular solution we're going to show you gives writers priority. Let's hold on to that thought, because we can ask ourselves whether we have to do it that way — but that's what we've got here. Now, this is where things got complicated, but it's not really. The state variables are four integers and two condition variables. The four integers keep track of: the number of active readers (AR) — a reader actually talking to the database; the number of waiting readers (WR) — readers that are ready to go but can't for some reason; the number of active writers (AW) — the ones actually modifying the database, and we already know the maximum AW could ever be: one; and the number of waiting writers (WW) — the number waiting to get into the database, which has no limit. Then we have two condition variables for sleeping, depending on whether we're a reader or a writer, and you'll see how this comes out in a moment. Here's our reader code. Those of you who looked at the number of slides probably took a quick in-breath there, worried about how many there are, but there's really a simulation in here that makes things faster — it's not as slow as it seems with all those slides. What a reader does is first check himself into the monitor, which means you always acquire the lock, and then we do a loop. The condition we're checking is: as long as there's either an active writer or a waiting writer — we sum them together and check whether it's greater than zero — we're going to go to sleep. There's that priority for writers that was asked about earlier. Since we can't run right now,
we increment the number of waiting readers — that's WR++ — and we go to sleep on okToRead. We have to give it our lock as well, so that it can release the lock under the covers. Then, when we wake up, we decrement the number of waiting readers — why? Because we're not waiting anymore; we're running. We're not active in the database yet, so we're not doing anything with AR, but we will keep looping — checking our conditions, going to sleep, waking up, checking our conditions, going to sleep, waking up — until AW + WW is zero. The reason we have to keep checking is that we have Mesa semantics, which means that even if somebody signals us, we just get put on the ready queue; then we've got to reacquire the lock, and by the time we finally get to run and emerge from condition wait, it's quite possible that conditions have changed again to make it unfavorable for us to run. So we always have to check our entry conditions. But assuming the entry conditions succeed, then we increment the number of active readers and release the lock, and now we perform the actual read-only access on the database. When we're done, we reacquire the lock, because we're going to alter the monitor state, and we decrement the number of active readers. Then we check: if the number of active readers is zero and the number of waiting writers is greater than zero, we wake up a writer; otherwise, we just release the lock. If you look carefully, we know for a fact that there aren't any active writers to worry about, because we were an active reader, so they ought to be sleeping; and we know there aren't going to be any waiting readers either, because a waiting reader would have gotten to go through. Now, the question here is whether we can put WR++
before the loop and WR-- after it. No, because waiting-reader++ means there's somebody sleeping on the sleep queue, so we only want to do WR++ if we're actually going to sleep. WR++ and WR-- are tracking the number of readers that are inside the sleep queue, so they can't go on the outside — that wouldn't help us there. Now, why are we releasing the lock there, before we go into the database? Yes — to allow more readers, exactly. We have to release here so that other readers can come through this entry point. All right, what about the code for a writer? We acquire the lock; we have a different entry condition: while the number of active writers plus active readers is greater than zero, we go to sleep. If we succeed, we increment the number of active writers and release the lock. So let's go back for a second: why can't we broadcast here? Somebody want to tell me why we don't broadcast to all the waiting writers? Because we only want one writer running at a time. Now, I'm going to show you later that we could broadcast, but for now let's do what seems obvious: we don't broadcast because we only want to signal one at a time. That's our reasoning for the moment. So here it's similar to what we said before, with a slightly different condition: active-writer++ basically says we are now an active writer; release the lock; perform the database access; and then check out. We acquire the lock, decrement the number of active writers, and then, if there's a waiting writer, we signal it to wake up; otherwise, if there's a waiting reader, we broadcast to them all; and release. Now, Alexander's comment there is correct, which is why broadcasting would work — it's just not as efficient. We'll get to that in a few slides.
So once again: why do we broadcast to readers instead of signaling? Because we can have multiple readers. On the question about why we can't increment and decrement waiting writers and waiting readers outside the loop: actually, that would technically work, because we have the lock, but I prefer this — I think it's a lot clearer, because it shows what the conditions are, and you would never condition-signal if nobody's waiting. Let's keep the code this way, because I think it's a lot clearer; let's not confuse things too much. All right, why do we give priority to writers? Notice we first check whether there are any waiting writers before we decide to do something with waiting readers. The real answer is: that's what we've chosen. The second answer is that in general there are far fewer writers than readers, so we just want to get them out of the way. The third answer is that writers typically update the database, and the readers are always going to want the most recent write. Now, the other question was: what happens if we signal and there's nobody waiting? That won't happen here, because we check before we do it, but in general the key thing with a monitor is that when you signal and nobody's waiting, nothing happens. That's important — in fact, it's a crucial part of monitors: when you signal and nobody's waiting, nothing happens. We'll talk about that a little later, but the simple thing to imagine is that if you've got a queue and you want to signal anybody who's waiting, you just signal, rather than having to do something more complicated. It makes things a little simpler. All right, here we go — we're going to see how this code works. Are you ready? We're going to
have the following sequence of operations: a read R1 from thread one, a read R2 from thread two, a write W1 from thread three, and then a read R3 from thread four, and initially we set all of the variables — AR, WR, AW, WW — equal to zero. Are you ready? Here we go. First, R1 comes along, and notice that nobody is in the system, so everything's zero. The first thing we do is acquire the lock — we enter the monitor — and then we ask: is AW + WW greater than zero? The answer is no, so all is well; we increment the number of readers, AR++, which gives us one, and we release the lock. Now, I want to point something out. Normally you have to be very careful whenever you do ++ on a shared variable, and AR — and WR, for that matter — are great examples of variables shared across an arbitrary number of threads. So why can we say WR++ or WR-- or AR++ without worrying about this? Because we have the lock, exactly. Notice we are in a critical section: we acquired the lock, we're releasing it down here, and everything in the middle you think of as a critical section, so we don't have to worry about atomicity or anything else. And after we've released — why release the lock there, again, before we enter the database? Right: to allow more readers, exactly. So the condition variables and the monitor are actually being used to control access to the database so that it meets our constraints. Any thread that gets into the database has already had its access checked, and once it's there, it's accessing properly and we're not violating the readers/writers constraints. Now here comes the next reader. R2 comes along and acquires the lock — notice it can, because the lock is free — so it's not a big deal. Now it checks the condition: is AW + WW still not greater than zero?
Yep — so we increment AR; now we have two; release the lock; and now we've got two readers simultaneously accessing the database. So far this is kind of boring, but the database could be accessed for a long time — these readers are busy doing something complicated. There are no locks held, and the only non-zero thing is the integer variable AR, which is two; nothing else is holding the system up, so we're good. Now along comes the first writer, and things get a little interesting. Once again we grab the monitor lock — great — and now we ask: is the number of active writers plus active readers greater than zero? Yep. So we know there are readers in the database, and therefore we increment WW, because there's now a waiting writer, and we go to sleep. That guy is sleeping — and where is he sleeping? On the okToWrite queue. Meanwhile, R3 comes along, and notice that the original two readers are still running, so this is going to be a little different than the two readers at the beginning. We grab the monitor lock — oh, by the way, for those of you who are purists and want to think under the covers: as soon as we do condition wait, notice we've done that with the lock held; we're still in the critical section, but when we do a condition wait, we give it not only the condition variable but also the lock, so under the covers the scheduler releases the lock at the same time it puts the thread to sleep. So the lock is free, but you, as a writer of code, should think of the lock as acquired everywhere between acquire and release. I'm telling you to fool yourself, because this is the way to think in this paradigm. That's why, when the reader comes along, it can grab the lock — because it's free. And now: is AW + WW greater than zero? Yes. So at that point we increase the number of waiting
readers and go to sleep. Why did we do that? Technically speaking, since there are readers going on in there, we should be able to let this reader go through and start reading — but why do we choose to go to sleep instead? We want to let W1 go first, exactly. Because there is a waiting writer, we go to sleep as a waiting reader; now you see the writer getting priority. So in fact what's going to happen is that AR is going to go from two down to zero as those two original readers finish, then we'll let the writer go forward, and then finally we'll let that reader come in. R3 can't start because there is a waiting writer. So here's our status: R1 and R2 are still reading away, checking out the whole database; W1 and R3 are sleeping — W1 is sleeping on okToWrite, and R3 is sleeping on okToRead. Are there any questions on the current state of the system? We good? All right, going to move on. So now what happens? R2 finishes; R1 is still accessing; W1 and R3 are still waiting. R2 finishes, which means it exits the database and acquires the monitor lock, which is free, and decrements the number of active readers, so now we're down to one up there. Now it checks the exit conditions: if the number of active readers is zero and the number of waiting writers is greater than zero, then we signal somebody. Well, if you look, there's still an active reader in the database, so you could say this guy exiting doesn't have to do anything — he certainly isn't going to wake anybody up — so he just exits, releasing the lock, and now we're done with him. Meanwhile, we wait, maybe a long time, who knows, and now R1 finishes, acquires the lock, and decrements the number of active readers. We just hit a milestone: we just went back to zero on the number of active readers. At this point, is the number of active readers zero and the number of waiting writers
greater than zero? Yes. At that point we signal on the okToWrite condition variable that somebody can wake up. Basically, all the readers are done, so we signal writer W1, and then we release the lock. Now let me go back here for a second. I didn't actually simulate the release of the lock, but because we have Mesa scheduling, when we signal, all that we're doing at that point is putting W1 on the ready queue. Nothing happens when I signal W1 other than taking it off the sleep queue and putting it on the ready queue. If this were Hoare scheduling instead of Mesa scheduling, the signal would cause the lock and the CPU to go immediately to W1; then W1 would do some stuff, and when it released the lock, we would come back here to finish up. That has some really nice mathematical properties, as we talked about last time, but it's really hard on things like the cache, and slow. Instead, what we do when we signal is just take that waiting writer, put it on the ready queue, and then keep going — keep using our cache state until our quantum comes up. So later, when W1 receives the signal from R1 that wakes it up: it was put on the ready queue, and as we said earlier, it ran. There was an interesting thing on Piazza today which I answered. What actually happens here is that W1 has been sitting on the ready queue; it wakes up, and under the implementation of condition wait, the first thing it does when it wakes up is try to reacquire the lock inside condition wait. If it turns out the lock is taken, because somebody else got in there before us, then it goes to sleep again — but this time it sleeps on the lock, not on the condition variable. Now, let me make sure I understand Brianna's question here: signaling does not release any locks.
If you look back here, when we did condition signal, what it did was just put that writer on the ready queue; then we released the lock here. We actually decided to release the lock right away, but we could have done whatever else we wanted and not released it immediately. So, how do we exit the while loops for the reader and writer? While you're thinking about that, I'm going to answer Carolyn's question: how do we exit the while loops? We exit the while loop because something about these variables changed. Let me answer that question with respect to the writer. When we signal the writer, notice what we've done at this point: we've decremented AR down to zero, and we've signaled the writer. So now AR is zero; we released the lock, and the signal put the writer on the ready queue. Up here, the writer was on the ready queue; it woke up, tried to reacquire the lock — let's assume that worked — grabbed the lock, and now it returns from condition wait, at which point we decrement the number of waiting writers, because there's one fewer of them now — we just woke up. We come back to the while loop, and now, in answer to the question: what is AW + AR? AW is zero, AR is zero; when I add them together, they're no longer greater than zero — so that's what just changed. As a result, we exit the while loop, increment the number of active writers, release the lock, and now the database has a writer in it; notice that active writers equals one and waiting writers equals zero. Questions? So the condition variables merely let us wait, and when we wake up, we recheck our conditions, and assuming whoever signaled us changed the conditions that had
put us to sleep, we exit the while loop — that's exactly what happened here. So when would we be waiting on the lock but not the condition variable? That's the situation where we executed condition wait, went to sleep on the condition variable, somebody signaled us, and we went onto the ready queue. We can't emerge from condition wait without the lock, because — remember — the way I'm telling you to think about this is that you always have the lock between acquire and release; this is a critical section. So the implementation of condition wait, under the covers, tries to reacquire the lock, and when it finally does, it returns, and now I know that when I emerge from condition wait, I have the lock again. Now, okToWrite is just a condition variable, so there could be many writers on there. How does condition signal decide which one to wake up? It's undetermined. Think of it as a non-deterministic choice — it randomly picks one. In fact, that can matter: you have to be careful not to assume that writers are going to be woken up in the same order they were put to sleep. If we ever have a piece of code where we want you to make that assumption, we'll make sure to tell you — "assume they wake up in the same order they went to sleep" — but unless you're told that, assume it's non-deterministic. The question is: if okToWrite is a queue, isn't there an inherent order? Well, there may be some combination of being put on the wait queue, being put back to sleep, somebody else getting to run — think of it as: you're just not sure; only one of them wakes up, and there may be many different reasons why they don't wake up in order. All right, so here's the situation: the writer is in the database, and if you notice, we have a waiting reader who's still sleeping. So we're writing away; finally we finish, we acquire the lock — we have the monitor — we decrement AW to zero, and
now we ask: are there any waiting writers? No. Is the number of waiting readers greater than zero? Yes — look, there's a waiting reader. So what we do is broadcast to everybody. Here it basically doesn't matter how many people are sleeping; we wake them all up, and then of course we release the lock and go forward. Suppose there were twenty of them — it doesn't matter; they all wake up, but only one of them gets to run at a time. So even if twenty were broadcast to, it's the first one that grabs the lock again that emerges from condition wait, and it says: look, I'm going to decrement waiting readers to zero, check my condition, see that the while loop is no longer satisfied, do active-reader++ — I hurried this along a little — setting active readers to one, and access the database. If there were twenty of them, the moment that first one released the lock, the second one would succeed in grabbing the lock, emerge from condition wait, go through the while loop, exit, and go to the database, et cetera. So if there were twenty on that queue and we broadcast to them, they would, one at a time, grab the lock, decrement the waiting-reader count, increment the active-reader count, and access the database. And then finally we're done: we acquire the lock, decrement the number of active readers, release the lock, and we're all done — at that point the database is idle and we've met our readers/writers requirements. Any questions? The thing to think about here is how clean this was: with the monitor paradigm — a lock and multiple condition variables — it's very clean. Now, this middle section here, the database access: I don't know that I would necessarily call it a critical section, because we can have multiple readers in there at once, but it's the resource we're doing some
sophisticated control on, where we're saying there can be multiple readers or one writer, but not both at the same time. So why, again, the while loop here? That's because we have Mesa scheduling: when we go to sleep, and somebody signals us and we wake up, it's quite possible that somebody else grabbed the lock before we did and changed the conditions. For example, suppose we're the last reader and we're about to wake a writer, but instead another writer comes along, beats us to the punch, and increments the number of active writers — we're going to go to sleep again. So you always have to keep checking the condition in a loop, and when you check the condition while holding the lock and it's satisfied, then you don't go to sleep, and you know the condition holds. That's Mesa scheduling. All right, questions here: can the readers starve — can the readers never get to run? Yep. Why? Because we always wake up and check our conditions again, and if writers keep coming along, they may prevent us from ever going forward. What if we erase the condition check in the reader exit? This is interesting. Suppose we do AR-- and then, instead of checking whether AR is zero and there's a waiting writer, we just signal. The potential there is that we could end up signaling a writer even when there are still readers in the database, or signaling when there are no waiting writers. Does this still work, or did we just screw everything up? The answer is: with Mesa, we don't care — it's the same idea; we always recheck our conditions. If we woke up a writer while there were still readers in the database, the writer would immediately go back to sleep, saying, oh, there are readers in the database. So even though we woke it up incorrectly, the entry conditions take care of making sure we never violate our invariants. It's
kind of a self-checking thing, and it means that, relative to the Hoare-scheduled situation, with Mesa you can be a lot lazier. Of course, this is inefficient, because we waste time with scheduling, but it's much more likely to be correct, and there may be situations where you can't compute the exact conditions for signaling; as long as the waiter checks its own conditions, you should be good to go. And even if we turn the signal into a broadcast, that's okay, because even if we wake up a thousand writers, only one of them will get to go forward, and the rest will go back to sleep. Now, how much time do you spend rechecking in Mesa? Not a lot — typically you don't loop too many times. The benefits of Mesa are that you get cache benefits, the schedulers are simpler, and the code is much easier to verify, so the advantages of Mesa scheduling far outweigh the disadvantages — the disadvantage being that you have to have a while loop and you might occasionally loop more than once. Now the question is: we were keeping readers and writers separate, but suppose we only have one condition variable — what then? Well, here's an example. Here are the reader and writer, and notice that I only have one condition variable, called okToContinue. If my reader entry condition isn't good, I go to sleep on that, and if my writer entry condition isn't good, I go to sleep on that, and then, when I'm done, the simple thing would be to just signal on okToContinue. This seems like it ought to work, based on what we just said, but if you carefully think it through, you can see that this might not be quite right: R1 arrives; W1 and R2 arrive while R1 is still reading; and you can get a situation where R1's signal is delivered to R2 instead of W1. It doesn't quite work, so in this situation you're going
to actually broadcast to wake people up and that's really because we haven't distinguished readers from writers and so we just got to wake them all up and let them sort themselves out okay so when we get lazy sometimes we have to get really lazy to get correctness now this is going to have some inefficiencies to it in that there might be a lot of things that wake up and then have to go back to sleep okay so this would actually give writers priority because any writers that happen to be in the system would wake up and run though if a couple of readers got to go first they might slip in there so it wouldn't be strictly priority based and there's also a way and this is for you to think about offline you can also arrange so that things run in exactly the order they come in such that readers and writers get to go in phases and so you don't have readers having lower priority than writers you can actually arrange for something more sophisticated but that's for you guys to think about so the exam is thursday it's getting close okay it's video proctored you've seen that information we want you to have your webcam and your phone you've got to figure out how to position it that's all on piazza and you need to talk to the tas if there's some issue with that topics are basically everything up to today's lecture if you notice we really haven't done anything new today we're going to talk a little bit more about implementation of threads in between but these are things you already know something about from the labs but scheduling is not part of the exam there's nothing from wednesday's lecture part of the video proctoring requires a camera on your face so talk to the head tas so homework and project work is fair game you should know what you've been
doing on your projects okay so midterm review there is one tomorrow there's a zoom link that's going to be mailed out i know that it exists and i know that the tas have it so they may not have posted it quite yet okay so any questions so yeah the whole point of zoom proctoring is the camera on you while you're working so you need to figure out how to arrange that and that's a good question actually a very good question yes you could have a cheat sheet both sides handwritten i guess we forgot to mention that to you guys you're welcome to put together a cheat sheet but consider this otherwise closed book we will give you any information that you need if you need man pages or other things we'll give those to you you should be familiar with the simple calling sequences and it'll be mostly pseudocode although try to write as correct code as you can if we're asking you to write c okay all right so today's lecture is potentially as i mentioned in scope but that's because this is stuff that we already talked about last week you should probably know the signatures but we'll make sure you have any complicated signatures things like open have a reasonably simple signature and if you transpose something we won't give you a hard time about that okay all right now the zoom proctoring info is on piazza i think we've posted it we'll make sure that it's posted if we haven't i thought it was up there so let's hold off on any further questions about the video proctoring but we do want this this is part of making sure that we have a nice clean exam and so everybody can feel comfortable that everybody else is behaving themselves okay and the recording setup for the phones is gonna be in the cloud i'm pretty sure that's the way we
settled on it so you don't need a lot of local space to make this work okay so moving on to the topics here can we construct monitors from semaphores well it's pretty easy to make a lock with a semaphore that's just the mutex version can we implement condition variables this way and for those of you that are worried about the video proctoring only the tas are going to be looking at the cameras it's not about everybody else so can we implement the condition variable this way wait basically says for the semaphore that's the condition variable we just do a semaphore p and signal does a semaphore v can anybody say why this might or might not work okay so semaphores have a queue right they can let you go to sleep so that part's fine there's a queue associated with semaphore p what else is an issue here yeah so the big deal here i'll assume this is what you meant is that you can't go to sleep holding the lock with a semaphore right if you grab the lock and then you call wait you're going to deadlock your system because you'll put this thread to sleep and you'll hold the lock and everything will be broken so this can't work for a condition variable even though it seems like it ought to okay so that will deadlock does this look any better this says well the way we do wait is we release the lock we do a semaphore p and then we reacquire the lock and signal just does a semaphore v what do you think okay so the worry here is that wait isn't atomic well the problem is not actually atomicity here the problem is history so if you think about it if you do a bunch of signals and then do a wait in this implementation the signals increment the semaphore and so the next waits are going to go straight through however wait in a monitor immediately puts you to sleep no matter what the history was okay so a signal to an empty condition variable does nothing and this implementation doesn't do
that trick all right so it may be subtle but this would not give you the condition variable portion of a monitor okay everybody with me whenever you do wait with a monitor you're always supposed to sleep the problem with this is if you do a bunch of signals and then do a wait the wait is not going to wait okay so i would think of it as if you have signals prior and then you wait you don't go to sleep and that's not the monitor interface okay what if the thread signals and no one is waiting okay that's a no-op in a monitor but with the semaphore if a thread does a v and nobody's waiting you increment and a later p just decrements and continues okay so anytime you go to sleep well i wouldn't worry about system calls now because we're assuming that semaphores do whatever is required to put you to sleep and probably inside the semaphore might be a futex or whatever we talked about at the beginning but yes anytime you go to sleep that's a system call but that's not really our issue here because we're assuming the semaphores have that figured out all right so the problem with the previous try is that p and v are commutative whereas signal and wait are not okay and so that's an issue okay and here's something that might fix the problem what we do is we say wait does release semaphore p acquire and then signal says if the semaphore queue is not empty execute a semaphore v is this okay no this is not okay because semaphores technically don't let you check their queues okay so that's the issue and there's a race condition here in that the signaler can slip in after the lock release and before the waiter executes the semaphore p it turns out you can do this and you can even do it for hoare scheduling there's a solution in one of the books not your current one but you can look that up and there's a much simpler mesa scheduled solution which you could also figure out and as a hint it has
something to do with the fact that when you're holding a lock you might actually have other variables integers that could keep track of stuff all right so in conclusion remember the mesa monitor pattern okay the mesa monitor pattern is grab the lock loop until conditions are right do your thing and unlock and then on exit you lock maybe change some state variables signal and unlock okay well this one's a little subtle so i will say by the way synchronization is the hardest topic that we'll cover in this class and especially the first time you see these synchronization conditions it takes a little while to figure out what to look for so this is par for the course you've entered into the greater knowledge of synchronization here as a result of the last couple of lectures but it'll take a little while for it to settle in okay all right now i just wanted to quickly finish up because i want to move on to some other things here but if you want to do synchronization support in c you've got to be really careful here's a situation where if you acquire the lock and then you run into some error you need to release the lock and return because otherwise if you just return then the lock is held and things might be broken there's something you can look up do a google on setjmp longjmp in c which is even trickier because here's the stack a calls b b calls something called setjmp which really says that if we now call c d e then e can call longjmp and it'll basically pop back to b popping off all those chunks of the stack that's support in c but if you have that you can end up jumping back to b and the lock is still held so you've got to be very careful with exceptions okay to make sure you release the lock and this gets even worse if you have more than one lock going on so if
you have lock one and lock two then you have to figure out how to release them all under errors and so c is not great when you're dealing with lock acquire and release okay c plus plus is both worse and better for this okay the one thing that's worse is this if you notice this pattern here where i have a function i acquire the lock i call some other function and then i release the lock well that other function could throw an exception so c plus plus and java and some of those others have exceptions and the issue there is if you throw an exception it's not necessarily going to return to do foo it's actually going to jump out of the caller and you've left the lock held okay so you might say well what i really do is i try do foo and i catch errors and i do the release okay this is a pattern you might be familiar with what's better in c plus plus is guards so this is a pretty cool idea here's a function where i grab a lock but i do it as a special guard lock and notice that this is in the local variable position i know you don't necessarily know c plus plus a lot yet but here's a local variable position what that means is this lock variable was actually allocated on the stack on entry to this procedure and any exit of that procedure no matter what will release the lock and so you can have exits normally or exits because of exceptions and the lock will always be properly released so if you ever find yourself programming in c plus plus and using locks you want to make sure that you have something like this a guarded lock so that it will be automatically released no matter what causes your procedure to exit the other thing is python has a with keyword which i'm sure you're familiar with which is similar okay and this is again with lock if there's any reason that this with block gets exited then the lock will be released and by the way with is good for all sorts of
things including opening files and having them automatically close when you exit the block java yep and rust has mutex guards there's all sorts of stuff okay most languages that are more powerful and more modern than c certainly have nice clean ways of doing this i did want to point out that java which you're all familiar with for various reasons actually has a synchronized keyword so every object actually has its own lock inside of it and so with this class account every time you allocate a new account object if you have a public synchronized method then when you run that method it acquires the lock on that object while it runs and so back when we were talking about the bank case if you make things like deposit synchronized then this balance plus equal amount automatically becomes a critical section that's protected by the lock that's inside the java object okay so that's kind of cool and then java also has support for monitors and so in addition to that one lock there's a single condition variable and you can use wait and notify or notifyall the equivalents of signal and broadcast in java okay so monitors are well supported by modern languages as well okay so last topic we'll see how far we can get with this i wanted to do a couple of things just because i've seen some queries on piazza that suggested it might be helpful to have a couple of quick discussions here so if you remember we were talking about multiple threading models this particular threading model is the standard one that you're dealing with in python for instance or even linux every user thread has a kernel thread associated with it okay and the way that happens is that for every thread the kernel maintains the thread's tcb of course the thread control block but also a kernel stack for syscalls interrupts and traps and sometimes this kernel state or the stack is called a kernel thread okay so don't let that throw you for a loop it's state and why do we
call it a thread well it's something that can be suspended and put to sleep inside the kernel okay so the kernel thread is suspended but ready to go while the thread is running in user space and as soon as the thread goes into the kernel then the kernel thread takes over okay and there are actually threads that are only in the kernel they still have a tcb they still have the kernel stack but they're not part of any process and they're busy doing things for the kernel okay and so those don't necessarily even have to run at user level so pintos which you're now familiar with if you were to look at thread.c what you'd see for instance is that the kernel portion of a process is basically a four kilobyte page which includes both the tcb at the bottom and the stack at the top okay and so what does that mean that means that the kernel stack is maximum 4k in fact it's a little bit less than that by however big the size of the tcb is okay and so why is there a magic number there well that magic number is some random bits such that if your stack happens to overflow it's likely to screw up the magic number and you might have some idea that there's a problem but the key thing here is that when you're in the kernel and you're running your pintos kernel thread you'd better not be doing fibonacci or anything super recursive because there's only a little bit of stack there okay and then there is also a page directory which points to a page table that's kept track of in the thread control block as well linux is similar at 8k so it's two pages okay two pages to hold your stack along with a thread control block and then something called a task_struct down at the bottom that basically associates the tcb with task state which could optionally be part of a process we're not going to go into that in detail right now but normally in multi-threaded processes which is not pintos where i'm sure you're all aware by now every process has
exactly one thread traditionally multi-threaded processes have a process control block per process and then each pcb has many tcbs so these are the tcbs okay thread control blocks and every process control block has many of them if there are many threads okay linux has one of these task_structs per thread instead and threads belonging to the same process share things like the address space and so on so linux is a little bit less clear about whether something is a process or a thread in some other process but for now rather than worrying about this the idea that there is a single process control block that points at one or more threads is the way you ought to think about it and in pintos it's easy because you can only have one thread per process okay now i'll leave that so what does our kernel structure look like well here's two threads they each have their kernel thread right which means a stack plus a process control block piece to describe the process but the kernel thread is this kernel stack okay and then the kernel also has code globals and heap for all the kernel code now there was an interesting discussion i saw on piazza about well if the kernel is holding a data structure like a pipe where is it well the kernel has got lots of memory space it's got a heap it's got globals so the kernel has a bunch of data that's unique to the kernel that it can store over time so the stack is not the only place for data to be stored in the kernel okay there's also heap and globals now if we go to a process that has multiple threads then what do we see well i'm sorry about the typo here with global wrapping around but there's basically code globals and heap that's shared and then each thread has its own stack and it has a kernel stack so in this process number one with two threads in it there are two kernel stacks to match the two threads that are at user level in that process okay and the code globals and heap for the kernel
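as an aside the pcb and tcb bookkeeping just described can be sketched in c roughly like this; the struct and field names here are invented for illustration and are not the actual pintos or linux definitions:

```c
#include <assert.h>
#include <stddef.h>

struct pcb;  /* forward declaration: threads point back to their process */

/* hypothetical thread control block, one per thread */
struct tcb {
    int tid;                /* thread id */
    void *kernel_stack;     /* every thread gets a kernel stack */
    int state;              /* READY, RUNNING, BLOCKED, ... */
    struct tcb *next;       /* link to the next tcb in the same process */
    struct pcb *owner;      /* owning process, or NULL for a pure kernel thread */
};

/* hypothetical process control block, one per process */
struct pcb {
    int pid;
    void *page_table_base;  /* one address space shared by all the process's threads */
    struct tcb *threads;    /* head of the list of this process's tcbs */
};

/* walk the tcb list to count the threads in a process */
static int thread_count(const struct pcb *p) {
    int n = 0;
    for (const struct tcb *t = p->threads; t != NULL; t = t->next)
        n++;
    return n;
}
```

the point of this shape is that the address space state lives once in the pcb while per-thread execution state lives in each tcb, which is why switching between two threads of the same process does not require changing the page table base.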
here is a full picture where we even have some kernel threads that don't have a user piece to them okay so in that scenario we can have these kernel threads doing things for the kernel and we have the kernel threads associated with the processes everything that's got a kernel thread is now schedulable so the scheduler in the kernel chooses between different kernel stacks and therefore different threads and so when we give cpu time out and that's going to be next lecture i know you'll be studying but you should definitely make sure to come and hear about scheduling scheduling starts talking about how we schedule across these kernel threads okay and of course because we have to enter the kernel to do scheduling if we were running some thread in user space we first transition into its kernel stack and then we do scheduling among those threads so you know we gave you this example remember thread s goes to t and t goes back to s and the reason i brought this up again is that scheduling works just like i showed you threads have their own kernel piece okay and that kernel thread portion of a user thread is the thing that gets switched when we go from scheduling one to another okay and that actually was here as well so here was an example i gave you this is from a couple of lectures ago time is to the right where we have a user thread that's running it's got its program counter the cs eip instruction pointer and its stack pointer in user space and then when an interrupt happens or it does a system call the very first thing we do is we switch over to the kernel stack and the kernel code so notice that these registers are now in red and they're actually pointing at kernel code and kernel stack and the remaining ones from the user are saved on the kernel stack so clearly if we want to start this user thread over again in user space we need to know where we were for the user stack and the kernel stack excuse
me the user stack and the program counter and so we save them on the kernel stack okay and there's also a page table base pointer we'll get into more of that later and then we save out the extra registers and now here in the middle we're running on the kernel stack we've saved everything we need for the user portion and we're running away we're doing a system call maybe we're doing interrupt handling maybe we're doing scheduling okay but notice that the registers that's this box here has stack pointers and instruction pointers that are all pointing into kernel code all of the user stuff is saved on the kernel stack so that when we want to return we basically undo it okay we first restore the registers that aren't the stack pointer and the instruction pointer and then when we do a return it restores the instruction pointer and the stack pointer and we're good to go now the question is how does the interrupt know which kernel thread is associated well the answer is that if you look at the lecture where i first introduced this the stack pointer for the kernel thread associated with the running thread is stored in the tss structure so at the moment you do an interrupt or a system call or any transition into the kernel what happens on the x86 is it immediately grabs that new stack pointer and inserts it into the stack pointer portion of the registers okay so that's how that happens and so when we change from one thread to another which i'll show you in this next slide we have to swap out that stack pointer in the tss because we've got a new kernel thread and if you notice here by the way we started with thread a and we ended with thread a we just ran in the kernel in the middle here the alternative is and you can look at switch.s we start with thread a we go into scheduling we restore thread b and when we're done it's now running the other thread another view of this is in fact here's the pintos for
instance one thread per process okay why do we need a tcb for every thread well because the tcb has all the information about its stack its priority and it's got list pointers that link it with the other threads so there's a bunch of stuff in the tcb that's important for maintenance okay and if you notice here this is for example just a different view of what i just showed you every user thread has its associated kernel thread with the kernel stack on top of it okay and the instruction pointer is called the pc here's another view of what we were just talking about when we're running in user mode the instruction pointer is pointing at code in user mode the stack pointer is pointing at the user's stack and then this kernel stack pointer points at the kernel stack of the kernel thread associated with the running user thread okay and if you really want to know what ksp is well it represents that special stack pointer in the tss the task state structure that holds that kernel stack pointer for us okay and here's an example where we're running in the kernel in a kernel thread which doesn't have a user portion so notice we're running in kernel mode the privilege level is zero notice it was three back here in user mode okay and here we are in kernel mode and notice we're running kernel code and we're running on a kernel stack so the question about how that base gets set it's set manually basically as part of our scheduling okay if you notice here's an example where we were running the user code but now we've taken an exception or an interrupt or a system call and at this point we're on the kernel stack associated with the user thread all right oops did i just crash here oops sorry so i wanted to say a little bit although we're running a tiny bit low on time guys give me a moment if you notice when pintos hits an interrupt what happens is the hardware says oh here's an interrupt okay
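when an interrupt arrives the hardware uses its vector number to index a table of handlers; you can mimic that dispatch in plain c with an array of function pointers (this is a toy model of the idea, with 0x20 for the timer being just the example number used in this lecture):

```c
#include <assert.h>
#include <stddef.h>

#define NUM_VECTORS 256
#define TIMER_VECTOR 0x20        /* example timer vector from the lecture */

typedef void (*handler_t)(int vector);

/* toy interrupt vector table: one handler slot per vector number */
static handler_t vector_table[NUM_VECTORS];

static int timer_ticks = 0;

/* toy timer handler: just count ticks */
static void timer_interrupt(int vector) {
    (void)vector;
    timer_ticks++;
}

/* generic entry point: the vector number selects which handler runs */
static void dispatch_interrupt(int vector) {
    if (vector >= 0 && vector < NUM_VECTORS && vector_table[vector] != NULL)
        vector_table[vector](vector);
}
```

in real pintos the generic assembly entry pushes the vector number and calls into c, but the indexing idea is the same: register a handler in a slot, then dispatch by number.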
that interrupt for a timer for instance might be 0x20 okay because that's interrupt number 0x20 what that means is it looks in a table and says oh this interrupt is 20 hex let's grab the instructions to run and it turns out in pintos what happens is we push the number 0x20 on the stack and we jump to an interrupt entry which runs a generic handler but at that point notice we know which interrupt it was it was 20 hex okay this is an interrupt vector table yes okay and so this is basically how the kernel ties all the interrupts to the code that should run so in stubs.s there's a generic handler if you take a look at your code what happens there is we enter the interrupt we save the registers okay so this is a situation where we go from user to kernel via the interrupt vector that's going to take us to this situation here where we go into the interrupt vector table it's going to tell us where to start running and when we enter the kernel we're going to transfer so that we start running code associated with the interrupt okay so the various numbers if you take a look at that table correspond to different interrupts okay some of them are system calls some of them are interrupts okay so here's a situation where now we've just switched to the kernel thread for the process and we might be pointing at code that was associated with the interrupt handler in that instance okay but we're running on the stack associated with the kernel thread associated with the user thread that was running okay and so here we now call the actual code in interrupt.c to run the handler for the timer interrupt okay and pintos has a second table which is a mirror of the first one okay but that table is for pintos handlers to handle the timer interrupt for instance okay and if you look in timer.c you'll see that so that timer interrupt handler is pintos's version of what to do with timers and it's going to deal with ticks okay and the tick updates a
bunch of counters for threads and if it says well this thread's gone too long then it's going to set a yield flag and we know at that point that we're going to yield the current thread and do something else okay thread yield basically is on the path to return from the interrupt it's going to set the current thread back on the ready queue and then call schedule which is next lecture which selects the next thread to run and then starts running it okay it's going to call switch threads which is switch remember we talked about that earlier it's going to set the status to running if it's a user thread it's going to activate the process and so on and then it's going to return back to the interrupt handler i'm just giving you this very quickly so you can see it once okay so here's a situation where we were running this guy and the scheduler decided the second guy's going to run and so switch switched us from the kernel thread on the right to the kernel thread on the left so that now when we go to return we're going to return to user mode okay so each thread is going to have its own unique thread id okay and the kernel thread is associated very tightly with the thread that's running because this is the thread control block for that thread okay now notice that we called timer interrupt we did tick we decided we needed to yield we decided we needed to switch so when we switch threads like this notice what happens when we return from the interrupt we're going to return voila to a new thread okay that's exactly how scheduling happens okay so we just undo all of this and we return and suddenly we're running the new thread instead of the old one and the old one is on a ready queue somewhere so this is the magic right the magic is timer interrupts happen they decide whether it's time to schedule they pick a new guy to schedule they take the current kernel thread put it to sleep they load the new kernel
thread and then when they return from the kernel they're now running the new thread and we've just scheduled thread b instead of thread a okay this is my favorite quote i have to make sure everybody sees this dennis ritchie one of the designers of c and of the original unix basically put this comment into the kernel code that runs switch it says if the new process paused because it was swapped out set the stack level to the last call to savu this means that the return which is executed immediately after the call to aretu actually returns from the last routine which did the savu so he's talking about switch and look what it says you are not expected to understand this that's my favorite comment in any piece of code ever okay now the question here is is the time between timer interrupts decided by the hardware yes okay but only because the operating system has programmed it that way so the timer is programmable but once it's been programmed then it goes off on a regular basis because of the hardware okay now if you remember what scheduling is about scheduling is about deciding who's next and i'm not going to go into this now but next time we dive into that decision making how do we decide which is the next thing to run okay the other thing i wanted to briefly say something about here if you give me just a couple more minutes i'm almost done if you remember every process goes through a translation to take virtual addresses to physical addresses and that translation goes through a page table and that lets us basically make sure that every process has a protected space to run in and the kernel has a protected space to run in okay and so the address space basically is the primary mechanism for handling that translation and don't worry we're going to go into address translation in great detail in a couple of weeks but if you remember the basic idea was this one of mapping so the code for program one
is mapped to a code segment and data is mapped to a data segment etc etc which is independent from program two and program two basically looks just like this particular view of memory that we've been dealing with and what we're saying is that this address space that you're used to gets mapped through the translation to specific places what does that really mean when we're talking about kernel space well what it means is the virtual space that a process sees in pintos for instance has kernel space at the top okay and user space at the bottom so all the things the user is using are in this bottom part which has page table entries that point to physical memory the kernel space while it's mapped isn't available to the user okay so there are a bunch of page table entries that are in the virtual address space but if the user code tried to use them it would fault okay it would get a page fault and why do we do it that way well if you look at the page table entry by the way this is going to be described in great detail in a couple of weeks the user supervisor bit basically says whether a page table entry is for the user or not if the page table entry is only for the kernel and we're in user mode then you get a page fault and you can look at pagedir.c by the way to see this so what does that mean that means that if we take an interrupt notice how my privilege level went to zero then all of a sudden the parts of the kernel space that were unavailable are now available and these page table entries are usable so now we can have the kernel fully protected but all of that space is now available for heap you know there were questions on piazza where are the pipes stored well they're stored in kernel space how are they protected well they're protected because the user is not allowed to access them okay and of course the page table base register points at a particular place in memory where this page table is and so when we switch
from one process to another we just switch the page table base register okay all right and so for instance one kernel many stacks kind of looks like this okay that's the many threads and those stacks are only accessible when we're in kernel mode otherwise the users can't touch them okay all right questions okay i think we've run out of time i was going to look at a little bit more detail about the storage levels and kind of how things like pipes worked we'll save that for next lecture this will not be in scope for the exam all right so what i want to say here in conclusion we talked a lot about monitors i hope that everybody has a good idea now how the monitor works so the monitor is a programming paradigm it's a lock plus one or more condition variables you always acquire the lock before accessing any shared data and then in the critical section of that lock you check conditions and potentially go to sleep okay and you go to sleep only when you hold the lock okay monitors follow the logic of the program you wait if necessary you signal when there's a change so that waiting threads wake up and monitors are supported natively in a bunch of languages we showed you that and we went over the readers writers example in great detail we talked about kernel threads which are a stack plus state for independent execution in the kernel every user thread is paired one-to-one with a kernel thread certainly in typical pintos and also in typical linux which is not running threads at user level okay and the kernel thread is the thing that lets you go to sleep so the good thing about every thread having a kernel thread is you can put it to sleep if you try to do i o and none of the other threads are affected okay next time we'll talk about device drivers all right and so one last question on the chat the page table base register is switched from one to another when
you change the pcb not the tcb so when you change which process you're and then you got to change the page table base register if you're going from one thread to another you don't have to change it and actually just i had a little bit of a out anyway i i could show you that later but if you were to go back and take a look at the the slides where we were talking about um where we were talking about switching from one thread to another what you would see there is uh that i basically uh change the page table base register to page table base register prime so all right um i think we are good so i want to bid everybody uh adieu i hope you have a good night and uh we will see you on wednesday i hope and good luck studying all right have a good night everybody |
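That last point — the page table base register changes on a process switch but not on a thread switch — can be sketched in a few lines of C. This is a toy model, not Pintos's actual code: `switch_thread`, `process_t`, and the global `ptbr` are all illustrative names.

```c
#include <assert.h>

/* Toy model: threads belong to processes, and each process has its own
   page table. The hardware's page table base register (CR3 on x86) only
   needs reloading when the switch crosses a process boundary, because
   threads of one process share the same address space. */
typedef struct { unsigned long page_table_base; } process_t;
typedef struct { process_t *proc; } thread_t;

static unsigned long ptbr;  /* stand-in for the hardware base register */

void switch_thread(thread_t *from, thread_t *to) {
    /* ... save 'from' registers, restore 'to' registers ... */
    if (to->proc != from->proc)
        ptbr = to->proc->page_table_base;  /* new address space */
    /* same process: same page table, so the PTBR is left alone */
}
```

Switching between two threads of the same process leaves `ptbr` untouched; switching to a thread of a different process reloads it — which is exactly "you change it when you change the PCB, not the TCB."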
CS_162_Operating_Systems_and_Systems_Programming_Berkeley | CS162_Lecture_1_What_is_an_Operating_System.txt | All right, welcome everybody to a new term. We seem to have somehow managed to get 292 people so far on the Zoom, which is pretty impressive. I'm going to be lecturing twice a week from here, and hopefully this will work out well. To avoid chaos, let's have people type their questions in chat — I have two screens, so I can kind of watch, and we'll see how this goes. So welcome to the virtual version of CS 162 — since we talk about virtualization, I guess this is particularly appropriate. Can I get a couple of thumbs up in the chat just to make sure the sound is good with everybody, and please disable your cameras if you could. My name is John Kubiatowicz; I'll introduce myself a little later. Today we're going to dive right in and ask some questions, like what an operating system is — and maybe we'll say something about what it's not. We're going to try to give you some idea of why I find operating systems so exciting, and we'll tell you a little bit about the class. Before we get to the operational parts of the class, I want to point out that interaction is very important. This is going to be kind of hard — even when I teach live, the interaction portion is challenging on the first day — but once we get into the second day, I think we'll be able to ask a lot of questions, and it'll be great. There's a question about whether slides are going to be posted: yes, we're going to post both the slides and the videos; we're still getting things moving forward here. Keep in mind that you're free to ask questions, and let's move forward. So, first
of all, let's start with this, which is what I like to call the greatest artifact of human civilization. Does anybody recognize what this might be? Any thoughts? Very good — we've got several people who said "internet," and in fact there it is. What's great about this is that we have billions of people, and devices, all interconnected in one huge system. And lest you think this is a plug for CS 168 or 268 — the networking is just one of the amazing aspects of this huge system; the operating systems are really what tie it all together, and essentially tie everybody together, which is part of what's very cool about the internet. And of course you're all off on the internet right now — this is all entirely virtual, so I guess it's kind of appropriate to start a class this way. I had a couple of people comment that this looks like a big brain. Yes, in some sense it does: it's got huge numbers of connections and interconnectivity and multiple redundancies, so the notion of this as a brain is not entirely off base. What we're going to try to do as the term goes on is tame some of the complexity that's hidden inside both the internet and the devices on it, and see how to understand it. You know, if this were 168 we would probably dwell on slides like this, but this is pretty impressive: the original ARPANET couldn't handle more than 256 devices, and now we've got four and a half billion devices, penetrating maybe sixty percent of the world, which is just astounding. Some of the dates in here that are particularly interesting: the very early 90s is when the World Wide Web took off, and that was when the internet suddenly became something lots of people could use, and turned it into what it is today.
The other thing that's kind of interesting in this picture is the diversity of devices. This is a graph — it's Bell's Law — of the number of devices a person has. Originally it was one computer for millions of people, back in the original day, and as things moved down, now each of you probably has hundreds of devices working for you: modern cars have hundreds of processors in them, you all have cell phones and laptops, and there are little computers inside devices and thermostats and so on. This graph is kind of funny — it's almost an inverse Moore's Law graph, where the number of computers per person increases as they get smaller. And there are a lot of time scales: when we start talking about how to make this whole system work, we're going to have to figure out how to deal with things that take nanoseconds — or femtoseconds, in some cases — up to things that take seconds or tens of seconds, and somehow the system has to work across all those time scales. There's a little magic involved in that, and we're going to talk about it. I did see somebody say "femtoseconds, no way," but a lot of laser communications do operate very rapidly — we'll hold off on the femtoseconds, but sub-nanosecond, definitely, these days.

Now, operating systems are really at the heart of all of this. You make incredible advances continuously in the underlying technology, and somehow each device is a little bit different, every technology generation is a little different, and you've got to provide a consistent programming abstraction to applications no matter how complicated the hardware is. And you've got to manage sharing of resources: the more devices we have connected, the more resources there are to share. As we get closer to the end of the term — say, the last third — we're going to start talking about some very interesting peer-to-peer systems out there that allow us to have huge storage systems spanning many devices. (I did see some questions about posting of slides; in the future I'll have them up earlier than the day after lecture.) Some of the key building blocks of operating systems are things we're going to learn about in class: processes, threads, concurrency, scheduling, coordination — many of which you've learned about in the 61 series — address spaces, protection, isolation, sharing, security. That security is going to be both at the device level and then, as we build out into the network, we'll talk about things like SSL and then more interesting security models as we go. There are going to be communication protocols, and persistent storage — in projects I've worked on in the past, which I'll mention briefly later, the interesting question was how you store information for thousands of years without it being destroyed. We'll talk about transactions, consistency, resilience, and interfaces to all these devices. So this is a class that spans a lot of interesting topics.

For instance, here's something you do every day, multiple times, without thinking about it. You've got your cell phone and you want to look up something — a web page, or maybe you're using an app. What happens? The first thing that happens is a DNS request that tries to figure out the internet IP address of where you're trying to go. That goes to a bunch of DNS servers on the network, and they return information that helps your cell phone route through the internet — which is a very interesting device in and of itself, consisting of many pieces. That may go to a data center somewhere, with a load balancer that picks one of several possible machines out there, which will then maybe do your search, retrieve information from a page store, and put it back together into some page you can use, and you get the result back. You do this every day and don't really think about it too much, but once you start thinking about it, it gets pretty interesting. For instance: how is it that those DNS servers stay consistent, and why is it not possible to hack into them? In fact, back in the mid-2000s people were hacking into them — I'll tell you a little bit about that later. And how do you make sure the packets get enough priority when they come into an operating system, so that maybe your particular query doesn't get delayed a long time? There are scheduling questions. So this is a pretty complex system, and every time I spend the time to think about it, I'm amazed it works. It's pretty impressive, and hopefully by the end of this class you'll have enough knowledge of what's going on in all parts of the operating systems and the networks that — well, you'll be much smarter than when you started the class, of course — but you'll be able to appreciate it, and sometimes maybe wonder why it actually manages to work, or be impressed that it works.

So — but what's an operating system? What does it do? We could ask that question. Most likely, from the standpoint of what it does — this is like being a physicist measuring a bunch of things — you'd say: well, it's memory management, it's I/O management, it does scheduling, it does communication, it does multitasking or multiprogramming. You might ask: is an operating system about the file system, or about multimedia, or about the windowing system
or the browser? You know, back in the 90s there was a lot of fighting between Microsoft and a bunch of other companies about whether the internet browser constitutes part of the operating system, and depending on your point of view that may still not be a resolved question — but anyway, it was one that has been asked. So are these questions only interesting to academics? Hopefully not — hopefully they're interesting to you. (And I would ask everybody, including the person who just came in, to turn off their video please, because it will show up in the recording.) So, a definition of an operating system: "no universally accepted definition" is part of the answer here. "Everything a vendor ships when you order an operating system" might be a good approximation, but it varies pretty widely. It might be "the one program running at all times on the computer" — that's the kernel; you'll learn a lot about kernels as the term goes on. But you can see these two points of view are different: nobody would disagree that the kernel is the core of the operating system; they would disagree pretty widely about whether everything Microsoft ships with a Windows product is part of the operating system. Probably not. So as we try to drill down on what an operating system is, you're going to have to keep in mind that we're going to talk about things it does and pieces that are important, but maybe you'll never fully know what an operating system is. Typically, among other things, it's a special layer of software that provides applications access to hardware resources: convenient abstractions of complex hardware devices, protected access to shared resources, security and authentication, communication. So we could look at something like this, where we have some hardware, and the fact that many applications can simultaneously run on the hardware is something the OS has provided for us. That makes sense, and you will understand exactly how this works in a few weeks.

But maybe we could approach it this way. "Operating system" — what's the first word? "Operating." Well, that comes from the fact that there used to be people like switchboard operators — believe it or not, when you made a phone call they actually had to plug you in to the right connection and make the wires connect. Then there were computer operators, people who basically sat at one of these big machines for a long time and made sure it was running correctly. And then with operating systems, the "operating" part became more about making sure the disk is operating quickly, or the network is operating correctly, or the graphics cards are operating correctly. What about the word "system"? This is interesting as well. What makes a system? A system is something with many interrelated parts, where typically the whole is much greater than the sum of its parts, and every interrelated part potentially interacts with the others. Of course that's at least an n-squared level of complexity, and we're going to have to come up with APIs and other clever techniques to avoid n-squared complexity here, because things are complex enough as it is. And making a system — like the internet I showed you earlier, which has billions of components — robust, so it doesn't fail, is going to require an engineering mindset. So you guys are going to have to start thinking like engineers, and we're going to give you some tools to really think about how to make something that complicated actually work. Again, the internet is a great example of a big system where it's amazing that it works — and it doesn't always work. I'll pull up some stories later in the term about times when it definitely didn't work, my favorite being the time when there was a single optical fiber that divided the network into two pieces. It went through a tunnel in the middle of the country — the U.S. — and a truck went in and blew up, and it melted this fiber and actually temporarily partitioned the network. So there are times when it just doesn't work properly.

So systems programming is an important part of this class, and you're going to do a lot of it. You're going to learn how to take a system like this and figure out exactly how to make it work, and that's exciting. You're going to get some of the tools: you're going to learn about git, you're going to learn how to work in groups, you're going to learn about testing — all of the things that help make a complex system actually manageable and hopefully, eventually, workable. So part of making things work is interfaces. Here's maybe a 61C view of things, the hardware/software interface: you have hardware — that's these bricks — and you've got software, which might be a program from 61C, which hopefully will start coming back to you very rapidly. You had a processor, and you had memory with the OS in it; maybe you had registers in the processor, and those registers pointed at parts of memory, and that allowed this program to run. Maybe you had caches — we'll mostly remind you how they work — which help make the slow memory look fast. The way I like to think about a system with caches is that you want to make it as fast as the smallest item, like the registers, and as large as the largest item, like the memory or disk, and the way you do that is with caches. And of course there are page tables and TLBs, which will help us out in virtual memory, and there's storage — disk drives, etc. — and all sorts of devices like networks and displays and inputs. So making all of this tie together is
something you started down the path with in 61C — hopefully you remember that. And then of course there are interesting things like buses that tie it all together; 61C doesn't quite get into that level of detail, and we're not going to do that too much either — I might suggest 152 and 151, some of those interesting classes, or maybe 150, if you really want to talk about the buses and so on. Then of course there's the instruction set architecture, which you did talk about, and which abstracts away a lot of what's going on in the processor so that people running programs, and compilers that are compiling programs, have something common to use. So what you learned in 61C was machine structures — and you also learned C, which you're going to get to exploit a lot. I know the notion that you learned C in 61C is maybe greeted with a little bit of skepticism by some people, but you're going to get to learn it a lot more in this class. So the OS abstracts the hardware details from the application: it's not just the instruction set architecture that matters anymore. The ISA abstracts away the computational elements of the processor, but we're going to learn how to take a bunch of storage devices — disks, USB keys, cloud storage — and turn them into a single abstraction, say a file system, so that a user can use it easily without having to worry about where the bits are stored. And that's where we go with this class: we're going to learn not just the hardware abstractions from 61C, but abstractions for other devices as well.

So what is an operating system? Let's go through some things it does — again, let's try to get an idea operationally. One thing I've started to talk about here is that the operating system is, in some sense, an illusionist. It provides clean, easy-to-use abstractions of physical resources, and it does so in a way that lets you at least temporarily think that you've got infinite memory, that you have a machine entirely dedicated to you or a processor of your own, that there are higher-level objects like files and users and messages — even though, as you probably already know but will know very well by the end of the term, there aren't really files: a file is an abstraction over a bunch of individual blocks on a disk that are somehow put together with inodes to give you a file. So the operating system is busy providing the illusion of a much more usable machine, so that when you program it you have a much easier time of it, and you don't have to worry so much about whether your data is on disk or on a USB key or in cloud storage. We're also going to learn about abstractions of users and messages, and we're going to talk about virtualization and how to take the limitations of a system and hide them in a way that makes it easy to program.

So, for instance: virtualizing the machine. Here's our 61C machine, which has a processor, it's got memory, it's got I/O with maybe storage and networks, and on top of it we're going to put this operating system thing, which we're learning about as we speak. That operating system — instead of giving us a processor with its limitations (it's got a certain set of registers, certain floating-point operations, certain exceptions that can be raised, and so on) — is going to give us an abstraction of something really clean called threads. We're going to have address spaces: rather than a bunch of memory bytes in DRAM scattered about, we're going to provide a nice clean address-space abstraction that gives us the ability to treat the memory as if it's entirely ours, even when there are multiple programs running. As I just said, rather than a bunch of individual blocks, we're going to have files. And rather than networks — which are a bunch of individual Ethernet cards, let's say, connected point-to-point between here and Beijing — we're going to have sockets, with routing under the covers. That's a pretty clean abstraction, which of course ultimately allows me to teach you guys, spread all over the globe as you are. On top of these threads, address spaces, files, and sockets is going to be the process abstraction, and that process abstraction gives us an execution environment with restricted rights, provided by the operating system — a nice virtual machine that your program can run in, abstracted away from all of these physical details. And on top of that you have your program. One thing you get to do a lot more of in this class than you've done so far in your career is write user-level programs running on top of a Unix environment: you're going to have compiled programs that you have produced, running on top of your process abstraction. And to give you a clean environment inside the process abstraction there will be system libraries — the C library, the security libraries, and so on — many of which abstract even further and give you nice clean abstractions that allow you to do, say, SSL very easily. There's an interesting question in the chat about closed captioning: some classes, like last term, we even had a live captioner, but unfortunately we don't this time. What I will do when I put the videos up is let them get automatically closed-captioned by YouTube — but they won't be live, sorry about that. So this is our virtualized machine view: the application's machine is the process abstraction provided by the OS, and some people might
argue the system libraries are included too. Each running program runs in its own process, and the process gives you a very nice interface — nicer than the hardware. Now, a question here on the chat: is the hypervisor or the Docker daemon part of the process abstraction, acting as the top layer of the VM? We'll talk a little later in the term about Docker. Docker is a way of wrapping up multiple different little environments and potentially running them inside the process abstraction; it's not as isolated as, say, a full virtual machine, but we'll talk about that in more detail — let's stick with process abstractions for now. The process abstraction, as I'll show you in a second, lets you have multiple processes all running at the same time, each given isolation from the others. That's what we're going to start with for this first lecture. (By the way, "ISA" stands for instruction set architecture — that was a question.) So, the system libraries: what does a systems programmer think of? The system libraries are linked into your program, which is then compiled into bits that will run in the process. You're going to get very good at this: you're going to learn how to compile programs, link them with libraries, and then execute them in a process environment, and you'll learn how to invoke the compiler to do that. So this is the programmer's view.

So what's in a process? Remember, the process here is an environment that gives you threads, address spaces, files, sockets. A process, as I said, has an address space, which is a chunk of protected memory; it has one or more threads of control executing in that address space; and it has the system state associated with open files and sockets and so on. So this is a completely isolated environment. We're going to dive into processes very quickly in this class, and you're going to learn how we can have a protected address space and multiple threads running in an environment that's protected from other processes — even though, for instance, maybe there's only one core running, we're going to give the illusion that there are multiple cores running multiple processes at the same time. You've all seen this: here's an example on, say, a Mac, where you look at the process monitor, or the task manager, or you do a `ps aux` on a Linux box. What you see — which is perhaps surprising if you haven't really thought about it — is that there are many processes running all the time on your typical laptop: many things going simultaneously, 50 or 100 of them. Mostly they're sleeping, but they're there to wake up and do some execution at some point. Now, the question of why the middle layers of abstraction are necessary: part of the reason we have many layers of abstraction is that if you try to squash all the layers down — which is sometimes done in very specialized environments — you end up with an undebuggable mess. Multiple abstractions, assuming they don't make things too slow, are a crucial aspect of making things actually work properly, and you'll see even modern operating systems still have several abstraction layers. You'll appreciate them, I think, as we go forward, because it's much easier to have an operating system where a device driver talks to the disk, then a file system provides files, and then a process abstraction protects those files and exports them to programs. (And yes, somebody brought up the image of programming in ones and zeros — I can say that I've done that, and it's not pleasant. Anyway, moving on.)

So here's the operating system's view of the world when there are multiple processes. Each process gets its own set of threads and address spaces and files and sockets, and might run a program with its own linked libraries. What's interesting about this point of view is that these processes are actually protected from each other. The operating system translates from the hardware interface down below to the application interface, and each program gets its own process, which is a protected environment. So in addition to illusionist, we're going to talk about another thing operating systems do, which is referee: managing the protection, isolation, and sharing of resources. This is going to become particularly important when we talk about global-scale systems. Imagine storage that spans the globe, with many individual operating systems running at the same time, each of which could be corrupted in one way or another — you get to the interesting question of how you protect anything, and this is where the referee point comes into play. So here — and from now on we'll be more consistent with our coloring going forward — we have compiled program number one and number two, each linked with system libraries (you're going to learn about the C library very shortly, like I said), and they are running independent of each other. However, in this simple example there's only one processor — and, before somebody asks, one processor with one core. How can these two things appear to be running at the same time? Well, we start out with one of them running. The brown one is running: it's using the processor registers, it's got a process descriptor and a thread descriptor and memory (you'll learn about those as well), and it's busy getting CPU time. The green process is not running, but it is protected. So now, how do we get the illusion that there's more than one processor, or that each process has its own processor? Well, each process has its own process descriptor in memory, and the operating system has to have some protected
memory as well and what we're going to do periodically is we're going to switch from brown to green and vice versa okay so here's the example of going from brown to green so the brown device has this process descriptor here the green one has the other the green one and what we do is we go through a process switch where the registers are stored uh through the os into their own process descriptor block and then the green ones are reloaded and what happens is voila the registers are now pointing at the green memory and the green one picks up from exactly where it left off okay and then a little bit later our timer is going to go off and we're going to switch back the other way and if we do this frequently enough you get the illusion that multiple processes are running at the same time and uh we're going to talk about this how this works in detail so um i can uh very confidently say that in a few weeks you will have a very good idea of how this works so but at the high level it's very simple we're just switching the processor back and forth between brown and green and as a result we get the illusion that they're both running and notice that what do i mean by uh the illusion well the process one can pretend like it's got 100 of the processor and process two can pretend it's got 100 of the processor and things just work out okay and that's up to the operating system now the question that's interesting here and does a program become a process when loaded into memory a program becomes a process that's a very good question for uh next week but when a program becomes a process when the binary has been loaded into memory and into the proper os structures so it has to have a a process structure allocated for it and it has to be put into the scheduler queue and so on once that's happened now that process is an instantiation of a running program so going a little further to that question that was there both brown and green could actually be the same program running in different 
instances with different state so we could have uh we could have one program two processes each of them doing something different and this is uh typically what would happen if you were logged into a shared machine and you were both say editing with emacs or vi uh each of you would have your own state okay so um and then the interesting thing about shared data we'll get to um in a little bit uh next week probably but uh yes so you guys are way ahead of me so that's good so now the question about i will say one uh answer this question here about what does it mean when a process is some percent of the cpu that literally means what it says if process 1 has 90 of the cpu and process two has ten it means that uh if you were to look from uh ten thousand feet you would look down and you see that process one gets the cpu ninety percent of the time and process two gets a ten percent of the time okay and mostly what you're going to see is that there might be one thing that's getting most of the cpu and the rest of them are getting very little of it and that's because they're mostly sleeping or waiting on io typically but if you look carefully and you uh you add everything up you'll actually get a hundred percent okay but that oftentimes if uh something's mostly idle most of that time comes up as the uh the idle process which we'll talk more about too okay so let's talk briefly about protection so um here we have brown and green um but i said they were protected from each other so what happens if process two reaches up and shows uh tries to access brown's memory or tries to access the operating system or tries to access storage which is owned by some other user what happens is protection kicks in the operating system and voila we uh we basically give that process the boot and typically cause a segmentation uh fault dump core and uh the green process is stopped now uh the question about more than a hundred percent uh is an interesting one it really depends on how the statistics 
are reported. If you have multiple cores, say four cores, then in one view of the world you could have up to four hundred percent execution; in another view, you only get 100% if you use all four cores. So you have to be very careful about what the reporting statistics mean, because I've seen it both ways. But if you see more than 100%, the tool is definitely reporting multiple cores, where each core counts as 100%.

Does one CPU equal one core? I'm going to say yes for now, and just know that that's not the whole story; we'll go a little further later. But for today's lecture you can certainly think of one CPU as one core. One CPU chip often has many cores, but we're not going to go there today.

So this protection idea: the OS synthesizes a protection boundary which protects the processes running on top of the virtualization, and prevents those processes from doing things that we've deemed not correct. And virtual memory, which we're going to talk about as we go, is exactly what I just said. I didn't frame this in terms of virtual memory, but one of the reasons that the green process isn't able to reach out and touch the brown memory is that virtual memory prevents it. Reaching out to memory you're not supposed to have access to can be seen as reaching past the boundaries of what the operating system has mapped for you in virtual memory. So think of today's lecture as giving you the high-level ideas, which we're going to drill down into in a couple of lectures.

This protection boundary, again, is part of the virtual machine abstraction. Somehow we've got these networks which have little packets with MTUs that are 200 bytes and what have you; we've got storage, which is a bunch of blocks; we've got controllers that do a bunch of
complicated stuff. You as a programmer don't want to think about the individual hardware, because if you had to do that, you wouldn't be getting anything done. So part of what the OS does is put these protection boundaries in and give you a clean virtualization, precisely so you can program without thinking about those things, and so you can program without worrying about somebody else trying to hack in. That's the idea.

There's an interesting question on the chat about whether the Java Virtual Machine would be an OS, and yes, there are points of view in which the Java Virtual Machine could be considered an OS. Let's save that question for another day, but bring it back if it looks like we're going somewhere where that's appropriate.

So the OS isolates processes from each other, and it isolates itself from other processes, even though they're all running on the same hardware. That's an interesting challenge, and we're going to tell you how it works.

Finally, the operating system provides a bunch of glue: common services. You may not have thought of it this way, but a good operating system is going to give you a file system, so you get a storage abstraction; or it's going to give you windows that properly take in mouse clicks and so on; or it's going to give you a networking system that can talk from Berkeley to Beijing and back without worrying about packets. These common services are typically linked in with libraries, and those libraries are things that you come to depend on when you're writing a program. So really, if you were to look at an operating system's functionality (referee, illusionist, glue), all of these things are part of what an operating system might be considered to be doing.

What gets interesting is when you set up non-mainstream operating systems. For instance, if I don't run out of time I'll briefly talk about
the Martian rover, for instance. You might try stripped-down versions with less functionality, to run on simpler hardware, or in a less malicious environment where there might not be somebody hacking in. Many times people build specialized operating systems which perhaps don't have all the protection internally, or maybe don't have all the storage services that you might see here, and so on. That doesn't make it any less an operating system; it makes it an operating system directed at a particular task.

So finally, some of the OS basics are I/O. Clearly, I've just said that we're providing the ability for storage and networks to have a nice clean abstraction over the hardware that we can deal with, and those are the common services.

There was a question here about flipping transistors and heat. I promise, as a computer architect, to talk about that in a few lectures, if that's interesting. Is there a smallest OS? Well, there was something that David Culler put together in the early 2000s called TinyOS, which is pretty small.

Finally, the OS maybe gives you some look and feel, so maybe you have display services. There's an interesting point here, back to what I talked about earlier in the lecture: is windowing part of the operating system? Is the browser part of the operating system? Well, perhaps; it depends on the operating system. For instance, Microsoft Windows went through a phase: Windows NT was initially a microkernel-type operating system, and the windowing system was outside of the kernel. Then they decided they weren't getting enough performance, so they went the opposite direction and put the windowing entirely inside the kernel, which was almost a reactionary response. So you could have windowing both in and out of the kernel, and the distinctions there have to do with protection, security, durability, and reliability.
Some of those questions come up, and hopefully you'll have enough to judge where you think it belongs as we get further into the class. And then finally we've got to deal with power management and some of those things which really only show up on portable devices, but these are all potentially managed by the OS. So what's an operating system? Referee, illusionist, glue: many different possibilities.

So why should you take 162? Well, other than it being one of the best classes in the department, if I do say so myself. I said 61C; I'm at 162. My apologies, I'm slipping up tonight. And by the way, just to be clear, I was saying that CS162 is one of the best classes, but you shouldn't quote me on that; I'll get in trouble. Some of you are actually going to design and build operating systems in the future, and it will be very useful for you to understand them. Many of you will create systems that utilize core concepts from operating systems, and this applies to more of you: it doesn't matter whether you build software or hardware, or you start a company or a startup, the concepts you use in 162 are ones that carry across very easily to many of the different tasks in your future. You're going to learn about scheduling, and you could schedule in the hardware if you're designing processors, schedule in the lower levels of the OS if you're building a core OS, or schedule in a big cloud system if you're building cloud apps. The ideas we learn here go across to many different places, and we'll even talk about some cloud scheduling as we get a little later in the term. All of you are going to build apps, I guarantee it, as you go forward, and you're going to utilize the operating system. The more you understand about what's going
on, the more likely you are to not do something that wasn't smart. You'll learn about locking, you'll learn about concurrency, you'll learn enough about the right way to design some of these systems that you're going to write amazing bug-free software, as opposed to almost-amazing, very buggy software.

So, who am I? My name is John Kubiatowicz; most people call me Professor Kuby, maybe because they can't pronounce my last name. I have a background in hardware design: there's a chip I designed for my PhD work, one of the first shared-memory multiprocessors that also did message passing, called Alewife. I have a background in operating systems: I worked for Project Athena at MIT as an OS developer, on device drivers and network file systems, and I worked on clustered high-availability systems. We had a project for a while in the Par Lab called Tessellation, which was a new operating system we were developing for multicore processors. I did a lot of work in peer-to-peer systems, in the OceanStore project (this was our logo here, the scuba-diving monkey), where I was addressing the idea of storing data for thousands of years. We were pretty much one of the first cloud storage projects, back in the early 2000s before anybody talked about the cloud, and some of the concepts I talk about at the end of the term will come from some of those ideas. I also do some quantum computing, and perhaps you could get me to talk about that at some point, but it's a little off topic for this class. Most recently I've been working in the Internet of Things, or the swarm; specifically, I have a project called the Global Data Plane, which is looking at hardened data containers. We like to use the analogy of the shipping containers that everybody sees down at the Port of Oakland: these are cryptographically hardened containers of data that can be moved around to the edge devices and back
into the cloud, and are ideal for edge computing. We'll talk about some of these ideas as well, and if any of you are interested in doing research in that area, that's certainly something you could talk to me about. And I will say that quantum computing is a real thing, becoming more real as we go; it's got to be real because Google and IBM talk about it all the time now. That's a little bit of a joke.

We have a great set of TAs this term. Neil Kulkarni and Akshat Gokoli are co-head TAs, and we have a set of really good TAs, so I'm very excited about our staff. I'll tell you a little bit about where we are in terms of scheduling sections; the sections are still TBA, and I'll say a little more about why in a second.

So let's talk a little bit about enrollment. The class has a limit of 428. I just raised it, and it's not going to go any higher, so we probably won't make the class any larger; there's one circumstance where that might happen, but I think it's unlikely at this point. I will say this: running a class virtually in the middle of a pandemic, especially something like CS162, is a serious challenge. So what we're doing is giving you a pretty good, I would say excellent, ratio of students to TAs this term, to make sure that things run smoothly. And so we probably won't make the class any larger.

The other thing to keep in mind is that this is an early-drop class. September 4th, which is a week from Friday, is the drop deadline, and what an early-drop class means is that it's really hard to drop afterwards. So over the next two weeks you need to make sure that you want to be in the class, because if you are still in the class past that early drop deadline, you either have to burn the one special late-drop token that you get as a student, or there's some
appeals process that doesn't always work. The early drop deadline is really there to make sure that when you start working in groups, the groups are going to be stable. We instituted it because what would happen is people would form their groups, and students who weren't entirely serious about the class ended up dropping out on their project partners, and that got to be a problem. So in the next two weeks, everybody needs to make sure they want to be in the class, and if you don't, you should drop early so that other people can get in, because we currently have a waitlist that was 75 or so the last I checked.

The other thing, which I'm going to say more about in a moment: we're very serious about requiring cameras. For discussion sections, for design reviews, and even for office hours, and we're certainly going to use them for exams. So if you don't have a camera yet, you need to find one. The only place in this class where you're not going to want to turn on your camera is lecture, because we currently have 328 people on the chat there, and that would be bad. As for the Wi-Fi issues people are asking about: just do your best. Zoom tries to adjust a little bit, and we'll deal with problems on a case-by-case basis. But I'm going to tell you more about this in a moment: having a class like this be all virtual is very hard unless people interact a little more normally, and that really requires people to be able to see you.

If you're on the waitlist: like I said earlier, we've pretty much maxed out sections and TA support, so if people drop, we're going to automatically move people from the waitlist into the class. So here's the thing you should absolutely not do. If you have friends who are on the waitlist and are thinking they're not going to take the class, make sure that they either get themselves off the waitlist or they
do all the work in the class. Because, as I'll mention in a little bit, if you're still on the waitlist and a spot opens up, we will enroll you in the class, and you'll be stuck (of course with an amazing class, as we mentioned earlier), but if you're not keeping up, that could be a problem. We have occasionally had people discover weeks into the class that they were enrolled and couldn't get out of it. Don't be one of those people.

Now, the question about discussion sections: I'll say a little bit more about them in a moment. But how do we deal with 162 in the age of COVID-19? If you look at this particular word play here, we've got collaboration in the middle, we've got to remember people, and we've got to figure out how to combine all of you together in your groups and produce something successful. This is challenging, and I know this is not the term you thought you were getting this fall when you thought about coming to Berkeley, and I apologize; but most of you, I think, experienced the end of last semester, unfortunately. Collaboration is going to be key. Things are considerably different this term, I would say, even than they were last term, because we're starting out fully remote, so you don't even get to see anybody in person. Maybe some of you will get to see each other, but I would bet that the bulk of you won't.

The most important thing is people, and then interaction and collaboration. I put up something here you all remember: I fondly remember coffee houses. This is what they kind of look like: you sit with people, you drink beverages of choice (I'm going to say coffee, to keep from getting in trouble), and you discuss things. This is how groups ought to work. And the question is, how do we do this when people are all remote? First of all, it's going to require work. I hate to say
this, but the way we make this turn out well is that we've got to work at our interactions. As you well know, if you don't look at anybody, with cameras off, and you just exchange email, things can go south very quickly: even when you didn't intend to imply something, everybody gets their feelings hurt and things just don't work out well. So we've got to figure out how to bring everybody along with us so we don't lose anybody. And if you notice, by the way, these people are holding hands; that's virtual, so we're not suggesting that you skip social distancing when you're bringing people along. But the camera is a part of this.

Call this an experiment, but cameras are going to be an essential component. You've got to have a camera and plan to turn it on, and if you have bandwidth issues, let's figure out ways of maybe lowering the bandwidth a little bit. You certainly need a camera for exams, so if you don't have one, you've got to make sure you have enough bandwidth and a camera for the exams. And you're going to need it for discussion sections, design reviews, and possibly office hours; that's going to depend on whoever's running the office hours. I'll get to this week's sections in a moment, but yes, we do have sections this week.

The thing about cameras is that they give us the ability to at least approximate what we used to be able to do when we sat together physically in person. In fact, we are probably going to give extra credit points for screenshots of you and your group meeting on a regular basis, drinking a beverage of choice and talking to each other. This is the kind of thing that needs to be strongly encouraged. Even before we had a pandemic, I had groups that somehow never met the whole term, and this got bad: by the end of the term all of the members were upset with each other, the project
failed, and they all got bad grades. This was just a bad scenario, and it didn't have to happen that way, because they should have been meeting, they should have been looking at each other while they were talking, and it didn't happen. So this is our experiment: cameras are a tool, not of the Man, but of collaboration. We want to bring back personal interaction, even though we're on either side of fences. Humans, even computer scientists, are really not good at text-only interaction. So we are going to require attendance: we're going to take attendance at discussion sections and design reviews, with the camera turned on. Hopefully that's clear. Any other questions on the camera? Why don't you type your questions, and turn off your mic if you're not asking a question. Actually, type your questions, period.

All right, infrastructure. Well, it's only infrastructure; you can't come see us. We have a website, which you've probably all gone to: cs162.eecs.berkeley.edu. That's going to be your home for a lot of information related to the course schedule. We also have Piazza; hopefully you have all logged into Piazza already. Assume that Piazza is the primary place where you're going to get your information. I'm also going to be posting the slides early, as has been asked several times, on the class schedule on the website, and when the videos are ready they'll be posted on the class schedule as well. So you'll be able to get everything related to the schedule on the website, and Piazza is kind of everything else.

The textbook is Operating Systems: Principles and Practice. It's a very good book. The suggested readings are in the schedule, so if you try to keep up with the material, you can get a written version in text of what I talk about, and I think those two together help a lot. There are also some optional things you could look at: I know David Culler
really liked this Operating Systems: Three Easy Pieces book, and there's the Linux Kernel Development book; some of these are interesting to look at as a supplement. One thing that you may not have known: if you log in with your Berkeley credentials to the network, which I think requires a VPN, you can actually get access to all of the O'Reilly animal books over the network as well. That's something Berkeley has negotiated with the digital library, which is pretty cool. And then there's online stuff: if you look at the course website, we've got appendices of books, sample problems, things on networking, databases, software engineering, and security; all that stuff is up there, plus old exams. The first textbook is definitely considered a required book; you should try to get a copy, even if it's only an e-book. There are also some research papers on the resources page that I've put up there, and we'll actually be talking about some research as we get later in the term, so use that as a good resource.

The syllabus: we're going to start by talking about how to navigate as a systems programmer. We'll talk about processes, I/O, networks, and virtual machines. Concurrency is going to be a big part of the early parts of this class: how do threads work, how does scheduling work, locks, deadlock, scalability, fairness. We'll talk about where address spaces come from and how to make them work, so we'll cover virtual memory and how to take those mechanisms and synthesize them into interesting security policies: virtual memory, address translation, protection, sharing. We'll talk about how file systems work: device drivers and file objects, storage and block stores, naming, caching, how to get performance, and all of those interesting things about file systems which you probably haven't thought about. And in the last couple of weeks of the class, we'll even talk about how
to get the file-system abstraction to span the globe in a cloud storage system, so that will be interesting. We'll talk, like I said, about distributed systems: protocols, RPC, NFS, DHTs; we'll talk about Chord, we'll talk about Tapestry, and some of those other things. And we'll also talk about reliability and security to a pretty big extent.

There's a question in the chat about cloud systems and why they haven't really taken over as operating systems, and I think maybe they have, more than you might think. The cloud has really become part of our day-to-day lives. The things people call a "cloud operating system," maybe with capital letters, may not have taken over, but a lot of other mechanisms have been synthesized together in ways you haven't thought about. Hopefully by the end of the term you'll have enough knowledge to evaluate that question for yourself: what is up with the cloud, is it really a monolithic thing or a bunch of mechanisms, and where is that at?

So, we learn by doing in this class. There's a set of homeworks, each of them one or two weeks long, and there's one you've got to get going on right away: homework zero. This is one of the things we do in the very first week; it's already been released, I believe, and you should get moving on it. It's basically learning how to use the systems. There's also a project zero, which is done individually, and you should get working on that too. This class is as much about doing things as it is about knowledge. You're going to build real systems, and you're going to learn some important tools as you do that; projects are either going to be done individually or in groups.

There was a question about Kafka and Cassandra:
probably we'll get some concepts from them a little bit later. A big thing to learn from this slide is: get going on homework zero, and project zero will probably get posted soon; both of those are things to do on your own, without your group.

Group projects have four members: never five, and three only under very serious justification. You must work in groups in the real world, and so you learn how to do it here. All of your group members have to be in the same section with the same TA, and that's why, for the next couple of weeks, the sections you attend are just any section you want: we don't have your groups yet. Once we have your groups, we will assign you to sections, and you should attend the same section; that's when the requirements for attending section will kick in. We do have a survey out on time zones and so on, to try to get an idea of the best places to put some of these sections.

Communication and cooperation are going to be essential. Regular meetings with the camera turned on are going to be important; you're going to write design docs and be in design meetings with your TA. And I will tell you: yes, you can use Slack and Messenger or whatever your favorite communication tool is, but if that's the only thing you do, it's not going to be great. You've got to have your camera on, and you've got to get together and see each other. Your groups are actually going to have to be formed by, I think, the third week of classes; it's in the schedule, take a look. When we get into groups, I'm going to have a half lecture where I talk a bit about mechanisms for groups as well: ways you can cope with the typical problems that groups have, and some good tools there, to give you a little idea. But the short answer is, you've got to decide groups
very shortly, and we do that typically after the early drop date, because at that point, in theory, people are stably in the class. We're going to have some mechanisms to help you form groups: there's going to be a "looking for a group" kind of thread on Piazza, and we may even have some Zoom rooms set up for people to, I don't know, interview their prospective group members or talk to them. We have a couple of different things we've been thinking of to try to get your groups together. But keep in mind, you want to have four members in your group, not five, and three only under serious justification. And you're going to be communicating with your TA, who's like a supervisor in the real world, so this group arrangement is very much like what you're going to run into when you finally exit Berkeley and confront the real world.

How do you get started? Well, there's going to be a survey out. On the question in the chat about TBD: yes, we're assuming that many of you might not have group members yet, and it's also the case that the final discussion section times haven't been decided, but only for the next couple of weeks, until groups are formed. There's going to be a time-zone survey out; you've probably already seen it, I think it was released on Piazza, but you need to fill it out and let me know where everybody is. I want to know if you're in Asia, or in Europe, or in New York, or wherever.

Get going on homework zero. Project zero is not quite out yet, but it will be very soon. Homework zero gets you going on things like getting your GitHub account, registration, getting your virtual machine set up, getting familiar with the 162 tools, and how to submit to the autograder. So homework zero is up and is something to get going on right away, and we will announce as
soon as project zero is up; it's going to be out soon. Sections are on Friday: attend any section you want. We will post the Zoom links, if they're not already posted, very shortly, and you'll get your permanent sections after we have the groups set up.

To prepare for this class, you're going to have to be very comfortable with programming and debugging C. You're going to want to learn about pointers, memory management, gdb, and a much more sophisticated and larger code base than in 61C. So we actually have a review session on Thursday, the third of September, to review C and C++ concepts quickly. Stay tuned, we're going to get that out, and consider going, just to give yourself a refresher. The resources page has some things to look at: there are e-books on Git and C, and a programming reference that was put together by some TAs a couple of terms ago. The first two sections are also about programming and debugging.

All right, the tentative breakdown for grading: there are three midterms and no final. The midterms are going to be Zoom-proctored, and a camera is going to be required, so please figure that as part of the class and get yourself a camera. The midterms are about 36 percent; then 36 percent projects, 18 percent homework, and 10 percent participation. So, yes, Zoom proctoring; projects I've already talked a lot about; homeworks you've heard a little bit about. As far as the midterms are concerned, we're going to set times after we know more about where people are. We haven't entirely decided, but the midterms are going to be either two or three hours long each.

The other thing I want to talk about here is personal integrity. There is an academic honor code: as a member of the UC Berkeley community, I act with honesty, integrity, and respect for others. You can take a look at it; I strongly suggest you look at it.
This class is very heavily collaborative within your group, but it should not be collaborative across groups, or with other people on homeworks. Things like explaining a concept to somebody in another group are okay. Discussing algorithms or maybe testing strategies might be okay. Discussing debugging approaches, or searching online for generic algorithms (not for answers): these are all okay. These are not situations where you're getting specific answers to your labs and homeworks. Sharing code or test cases with another group: not okay. Copying or reading another group's code: not okay. Copying or reading online code or test cases from previous years: not okay. Helping somebody in another group debug their code: not okay. Sitting down for a long session of debugging to help somebody, even if you think you're not copying code in: I'll tell you, a long debugging session has a tendency to cause the code to end up looking like your own code, so that's not okay. And we actually compare project submissions, and we catch things like this. We once caught a case where somebody sat down and debugged with another group and helped them out, and didn't do any direct copying, or at least they claimed not to, but when it was done the code looked so close that the automatic tools caught it. So don't do that.

The other thing not to do is put a friend in a bad position by demanding that they give you their answers for homework. We've had several cases like that recently, where one person was having trouble with old work and kind of guilted a partner or a friend into giving them an answer, and that gets both of them in trouble. So just don't do that; do your own work. And by the way, to help with this, we're trying for the first time to not have a curve in this class; we're going to do an uncurved version. We haven't put up the thresholds yet, but we'll see how that works.
Please just don't put your friends in bad positions by making them give you code, because they get in trouble as well, and it's just not worth it; you don't learn what you could learn by actually doing the work. What's the point of being in the class in the first place?

All right, the goal of the lecture is interaction, so: lots of questions. We already had a bunch of questions today; that's great, and I'm hoping this continues. Sometimes it may end up that we don't quite get through the topics I was hoping to cover, but it's much better to have interesting questions, and what I can do in a virtual term like this is post some supplemental lecture, an extra thirty minutes or so, if we don't quite get through the material I thought we would. So let's give this a try and see if we can make this virtual term as good as or better than it would be under normal circumstances. And again, if you have more questions about logistics, Piazza and the class website are your two best places to look for information.

So let's finish up in the last ten minutes or so and ask a little bit more about what makes operating systems exciting and challenging. This is what makes operating systems exciting: the world is a huge distributed system. We showed you earlier what people were calling the "brain view" of the network, and the thing that's interesting about it is all the devices on there, from massive clusters at one end that span the globe, down to little MEMS devices and IoT devices, and everything in between. Modern cars, for instance, have hundreds of processors in them; refrigerators have processors and web browsers; we've got huge cloud services, cell phones, little devices everywhere, and all of this together is one huge system. This is exciting. Why does this work in the first place, and
what's its potential okay um so you know this is why i think operating systems are so exciting because it's what makes this all work without them there would be chaos and things just wouldn't work so of course you've all heard you wouldn't be at berkeley if you hadn't many times about moore's law so the thing about moore's law which i like and always want to mention is moore's law basically says that you know for instance you get twice the transistors every 1.5 years or so for many years although that's starting to disappear on us now so that's an exponential curve or a straight line on a log linear graph what you may not know is gordon moore was actually asked at a conference once what he thought was going to happen and in a log linear graph on the fly at the conference he put down a couple of points drew a straight line and said well this is what's going to happen far into the future now normally that would be ridiculous and laughable except he was right which was pretty amazing okay so what's the thing about moore's law the thing about moore's law is it allows you to make zillions of interesting devices because there's so many transistors that you can shove into a little bit of a device of course the downside which happened back in the early 2000s was that putting these transistors increasingly on chip kind of ran into problems with capacitance and power such that you weren't able to make an individual processor as fast it used to be that you could you know wait a few years and get twice the performance of a machine that you're currently working with somewhere around the 2000s that stopped and suddenly what did you do well suddenly people had to make multi-core processors and lots of parallelism and so you know from an operating system standpoint this is par for the course because you know i already showed you a huge system with billions and billions of devices and so yeah so the fact that
chips have multiple cores on them is it's cool it's uh you know it's enabling of lots of stuff but it's just kind of that's the way it is and it's interesting about how we get around that complexity okay so around the 2000s we suddenly had multi-core the power density thing i think is a funny way to look at this if in 2000 if instead of basically trying to keep making the processors grow as fast in performance as they were if you had done that what would happen is we would have chips that had the the power density of a rocket nozzle um and you could imagine putting a laptop like that on your lap might be a little uncomfortable so power density capacitance a lot of things is what kind of led people to suddenly make multi-core instead of making things faster but they did that okay so by the mid-2000s we had many cores on a chip okay and so parallelism's exploited at lots of levels all right and uh somebody pointed out the the stock of intel and amd went up hugely um that's true uh but that was because they were delivering something that everybody needed which was lots of processors on a chip all right and the problem of course is as you're well aware moore's law is ending and it's not officially well it's officially over in the original growth but people are still shoving a few more transistors on there but unless there's some fundamentally new technology um we're basically going to see the end of that growth of more uh you know smaller transistors but it doesn't mean that people aren't still shoving lots of devices together and connecting them with networks it just means networks become more important okay and uh by the way vendors are moving to 3d stacked chips and all sorts of cool ways of having a single device have even more transistors on it even if moore's law is ending so um i have no doubt that things are going to continue uh quite a ways into the future the other thing is storage capacity keeps growing okay we've got uh various moore's law-like graphs of 
storage um society keeps getting more and more connected and so we have more devices more storage and more devices more storage more people means more need for operating systems which is why you're in the right class now our capacity keeps going up okay people need more connections okay and they're at the small scale and the large scale but not only pcs we have lots of little devices we've got lots of internet of things devices you saw this graph earlier i showed you but we've got little temperature sensors and fitbits and things you carry on your body and things you put in your cars and all the way up to the cloud okay so what's an operating system again it's a referee it's an illusionist it's glue that helps us build these huge interesting systems and that's what you're going to learn about this term the challenge which i'm going to kind of close with the challenge is complexity okay applications consisting of many software modules that run on many devices implemented on many different hardware platforms running different applications at the same time failing in unexpected ways under attack from malicious people leads to craziness and complexity right and it's not feasible to test software for all possible environments and combinations of components and so we're going to have to learn how to build these complex systems in ways that basically work and some of that is going to be learning just how to design systems that are correct by design rather than correct by accident okay the world is parallel if you haven't gotten that by now here's an example from 2017 the intel skylake 28 cores each core has two hyper threads so it's 56 threads per chip and then you put a bunch of these chips together and you get a huge parallel system in a tiny box and you put a bunch of boxes together and pretty soon you've got the world okay uh yes and 28 times two is 56 not 58 very good so with that not only do we have the chips which are interesting but i want you
to realize that the processors are only part of the story it's all of the i o that's the interesting part and we'll talk about that but it's not just this processor up here it's everything connected to it it's the devices it's the networks it's the storage okay so this is interesting complexity when processing hits the real world and that's where the operating systems get involved um i thought i'd put up this graph just to leave you with a few things to think about so here is millions of lines of code um and if you look at the original linux not the original linux but version 2.2 which is quite a you know 15 years ago whatever at least and you look at the mars rover these are on the low end of this scale but now you kind of look at you know firefox and android and linux 3.1 which is a little bit older now and windows 7 and then you get up into kind of windows vista and the facebook system itself and mac os and then you look at mouse base pairs here um that's a genetic thing that's 120 million things you can see that our systems are very complicated okay and so um you can go by the way to this source and you know select the things you want and look at this yourself okay this is informationisbeautiful.net visualizations million lines of code it's kind of fun to look at okay so you know the mars rover here is a very amazing one you know there's been a couple of instances of the rover but this particular one one of the first ones was pretty amazing they were able to send it up and land it on mars and it ran for a decade or more um it had very limited processing a 20 megahertz processor and 128 megabytes of dram and so on and had a real-time operating system but for instance you can't hit the reset button or debug it very easily however they were able to set it up in a situation where they could figure out some timing problems they had and they were able to debug it and repair it remotely
which is pretty amazing and i'll talk more about that as we go but you need an operating system on something like this because you perhaps don't want it to run into a ditch while it's busy taking scientific data or whatever okay and so it's very similar kind of to the internet of things in its size so this kind of processing is par for the course for really tiny devices and so we're going to talk about this kind of device in addition to the really big ones as we go okay so some questions to end with does a programmer need to write a single program that performs many independent activities and deal with all the hardware does every program have to be altered for every environment does a faulty program crash everything does every program have access to all hardware hopefully the answer to these is no as we'll learn as the term goes on and operating systems basically help the programmer write robust programs so in conclusion to end today's lecture operating systems are providing a convenient abstraction to handle diverse hardware convenience protection reliability all obtained in creating this illusion for the programmer they coordinate resources protect users from each other and there's a few critical hardware mechanisms like virtual memory which we briefly brought up and which we'll talk about more that help us with that it simplifies application development with standard services and gives you fault containment fault tolerance and fault recovery so cs162 combines things from all of these areas and many other areas of computer science so we'll talk about languages and data structures and hardware and algorithms as we go and i'm looking forward to this term i hope you guys all are having a good first week of class and we will see you on monday all right ciao
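the fault containment point above (a faulty program shouldn't crash everything) can be sketched with a tiny experiment; this is my own illustration in python, assuming a posix-style system, not an example from the lecture:

```python
# a child process crashes itself on purpose; the parent process
# (standing in for "the rest of the system") survives and carries on,
# illustrating fault containment between processes
import subprocess
import sys

# run a deliberately crashing program in its own process
result = subprocess.run(
    [sys.executable, "-c", "import os; os.abort()"],
    capture_output=True,
)

# the child died abnormally (nonzero status)...
print("child exit status:", result.returncode)
# ...but the parent is untouched and keeps running
print("parent still running")
```

the same crash inside a single process, say a bad pointer in one thread, would have taken everything down; the process boundary is what contains the fault.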
CS_162_Operating_Systems_and_Systems_Programming_Berkeley | CS162_Lecture_12_Scheduling_3_Deadlock.txt | okay welcome back to 162 everybody um we are going to do the third lecture we have on scheduling today and um i definitely encourage you to catch up on the other lectures if you've gotten behind since the midterm one of the things we did talk about last time which i wanted to remind you of was real time scheduling and normally when you hear about scheduling in an operating systems class you often just hear about sort of the standard performance sensitive or latency sensitive you know responsiveness and fairness sensitive scheduling algorithms but i always like to talk a little bit about real time because real time is different in that predictability is important so rather than what you typically worry about in scheduling here it's far more important to be predictable than even to be fast okay because you want to predict with confidence the worst case response time and in real time scheduling performance guarantees are often given per task you're sort of guaranteed a given deadline will be met and the way you get that guarantee is you have to give the scheduler information about what your worst case computation time might be um in conventional systems we talk about performance and you know throughput is important okay and so real time is really about enforcing predictability and it's important because for instance hard real time might show up if you're worried about physical world scenarios how long between when you press the brake on a car and when the brakes actually engage that might be a real-time problem and it's very important that you meet a deadline there because if you don't then the car might crash now there is a discussion here about gpu scheduling we probably won't talk about that we're mostly talking about regular cpu scheduling
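earliest deadline first, the real-time policy this lecture refers back to, just picks whichever ready task has the soonest deadline; a minimal sketch, with made-up task names and deadlines:

```python
# toy edf ready queue: each task carries its next deadline
tasks = [
    {"name": "brake_check", "deadline_ms": 5},
    {"name": "log_flush",   "deadline_ms": 50},
    {"name": "sensor_read", "deadline_ms": 12},
]

def pick_next(ready):
    # edf scheduling decision: the earliest deadline wins
    return min(ready, key=lambda t: t["deadline_ms"])

print(pick_next(tasks)["name"])  # brake_check
```

the predictability guarantee comes not from this selection rule alone but from admission control: the scheduler only promises deadlines when the declared worst case computation times fit.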
for now the thing about hard real-time scheduling again is it's really important to meet the deadline and this can be a situation where if you don't meet deadlines maybe the car crashes or you have a system that's in a hospital and maybe the patient dies if the real time scheduling's not met and we even introduced something called earliest deadline first scheduling last time which is a very common one for doing real-time scheduling um we also sort of distinguish between hard and soft real time the key thing about hard real time is it's crucial that you actually meet the deadlines and you assume that you don't want to miss any deadlines whereas soft real time is a situation where you want to meet deadlines with high probability and typically might be in something like multimedia servers or whatever and something like the constant bandwidth server cbs which we didn't talk about last time is a variant of earliest deadline first for multimedia all right the other thing that we were talking about that i wanted to mention was stride scheduling stride scheduling is something that we talked about after we talked about lottery scheduling and this was the notion of achieving a proportional share of scheduling without resorting to the type of randomness we talked about in the lottery scheduling and thereby sort of overcoming the law of small numbers problem where lottery scheduling really only evens out when you have long enough tasks that the law of large numbers basically stabilizes it the stride of each job you could think of as something like a large number divided by the number of tickets and for instance w might be 10 000 and perhaps task a has 100 tickets b has 50 c has 250 and in those instances basically the strides are for instance 100 for a 200 for b and 40 for c and what is that stride we talked briefly about the fact that sort of every time you get to schedule and
run for your time slice you uh you add your stride to your counter and those tasks with the smallest accumulated stride are the ones that get to run and so as you can imagine the low stride jobs with lots of tickets run more often and this is starting to get a way of applying fair queuing to scheduling and and basically thereby giving a proportional fraction of the cpu okay and really what i talked about a little bit too quickly at the end of lecture because we ran out of time and i wanted to repeat here for everybody was this notion of the linux uh completely fair scheduler or cfs and this is uh actually in use you're probably using it if you have a linux box and the goal here is that each process gets an equal share of the cpu so rather than talking about priority scheduling or or uh talking about round robin scheduling or some of the other ones we were talking about which don't tie the schedule directly to the cpu cfs like stride scheduling ties the amount of execution time you get to the cpu and so as a simple example we'll get to more complicated ones in a second here the idea here that n threads are running simultaneously you have this model as if the cpu were subdivided into n pieces and somehow we were able to get n pieces of the cpu to each of the n threads and if you could somehow do that then the threads would run uh at exactly one over nth of the time and they'd all get an equal fraction of the cpu and everybody would be happy okay and so the model is something like simultaneous multi-threading or hyper threading where each thread gets one over n of the cycles of course in general you can't do this uh yeah hyper threading maybe lets you do that a little bit with one or two threads but certainly not a big n and so what we need to do is figure out how to approximate this idea that every thread gets one over nth of the cpu but without having that ability to really subdivide those cycles and so of course the operating system gives out full speed cycles and 
so we have to use some other way of keeping the threads in sync so they sort of get on average one over n and that's really the basic idea here which is we're going to track cpu time per thread and schedule the threads to match up an average rate of execution and so you could look at this i mean in this previous figure what i had here was 1 over n of the cycles are given to each thread and so they all kind of progress at the same time okay in this newer idea here the threads of course when they're running are running fast they're faster than one over nth of the speed but we don't run them all the time and when we stop we can take a look at thread one two and three and notice to make a scheduling decision here that thread two is behind in its average amount of cycles and so we'll choose to schedule that one next and so we sort of keep the heads of the threads running at the same speed on average so we choose the thread here with the minimum cpu time total and this is very closely related to fair queuing as a general idea if you're familiar with that from networking okay and if we do that so just to be clear what we're doing is whenever the thread gets to run we're counting its total cycles and then when we stop we put it back in the scheduling heap and then we pick the thread that has the least number of cycles so far and we keep doing this in a way so that on average we get the same rate of execution between all of the threads okay and you could imagine if you remember your 61b ideas that we probably want to keep like a scheduling heap so we just put the threads in the heap and then the one at the top has the lowest number of total cycles and it's the one that we schedule next all right questions so this is we're going after rate of execution here rather than those other metrics that we were going at before like you know letting it just run for a little
while and then switching it out after time expires and i'll show you in a moment how we can now use this to give us something like priorities but in a way that still maintains this notion of rate of execution rather than strict priorities okay questions okay so sleeping threads of course don't advance their cpu time so what's interesting about this is that when they wake up and they're ready to run they're way behind and so they get selected to execute first and as a result we get this interactivity idea automatically think of this in contrast to the o1 scheduler and there's a question does cfs have any concept of priority yes just give me a second i'll get to that but if you remember the o1 scheduler the idea was we had some really complicated heuristics that would adjust priorities based on how much interactivity we thought or how short the burst time seemed to be to try to make sure that things that had really short burst times and might be likely to be interactive tasks would get higher priority and get to run as soon as they became runnable here we get this automatically just because we're trying to give the same rate to everybody and if a thread is sleeping it's not achieving its rate so when it wakes up suddenly it's got the cpu okay so this is the beginnings of why this was so appealing and why basically linus and others completely threw out the o1 scheduler for cfs because o1 had gotten way too complicated so cfs has some nice properties to it but we still want to worry about a few things we talked about for instance starvation last time and responsiveness and so in addition to trying to be fair about the rate of execution we certainly want low response time to make sure that no thread is left behind right and so starvation freedom might be another way to look at that and so we want to make sure everybody gets to run at least a little bit if you recall when we were talking about multi-level queuing there was this worry that the thread
sitting at the very bottom there in the lowest level queue might never get to run so what cfs does is it actually makes sure that everybody gets to run a little bit and so it has something called the target latency which is a period of time over which every process gets to run a little bit okay and so we call the quanta target latency over n in this case that means that we make sure that every thread runs one over nth of the time and that makes sure from a time standpoint we still have the ability to you know be sure we're going to run now so far it sounds like we're moving our way back into round robin but just hold off as soon as we get to priorities you'll see how this is fundamentally different so for instance a target latency of 20 milliseconds is not out of the question for cfs if you've got four processes running then each process gets a five millisecond time slice okay and the problem that you might think here is if we have a 20 millisecond target latency but 200 processes then this all falls apart and so cfs does have some outs call it a way to get by this high overhead case all right and that's going to be that we're going to have a minimum quanta time we never want our overhead to get too high for instance 0.1 milliseconds is essentially what i told you a context switch time can be in some circumstances so it would be really bad if we switched every context switch time okay and so that's basically a throughput metric and so cfs has something called minimum granularity which is the minimum length of any time slice and so if the target latency is 20 milliseconds and minimum granularity is one millisecond that says in this case of 200 processes we basically don't run anybody any shorter than a millisecond and so when you have so many things running that you hit the minimum granularity that's typically when the properties of cfs start to fall apart a little okay but just so you know there is this minimum granularity piece
as well okay priorities now um as those of you who have used linux recently know it still has priorities i wanted to tell you about what priority in unix typically is and that's the nice value so the industrial operating systems in the 60s and 70s gave you an actual priority that you could set directly when berkeley unix was kind of working on priority they decided to call this nice instead niceness okay and so when we were talking about the o1 scheduler we mentioned the fact that there were 40 priorities for the user those are actually called nice values and they range from minus 20 to 19 there's 40 of them in there and negative values are not nice positive values are nice and something that's more negative than another one gets higher priority okay so even if you were to look at priority 19 versus 18 the thing with the nice value of 18 is running with a slightly higher priority okay and um so for instance what you would do is you could start a job and then you could run nice on it and so if you wanted to let your friends get a little more time you might do nice on your job and that would raise the niceness value however only the root user is allowed to lower nice the regular users are only allowed to raise the nice values now as i mentioned the scheduler puts higher nice values or lower priority to sleep more and in the o1 scheduler this actually translated fairly directly to priority if you remember i showed you that there were 140 total priorities in the o1 scheduler the highest hundred of them were for what was called real time scheduling okay and the lower 40 were for these nice values okay but how does this translate to cfs so cfs was a drop-in replacement for the o1 scheduler so clearly there was some notion of niceness that must have been there and priority is certainly useful because certain things are higher priority than others but this idea that cfs is a fair queuing type scheduler says that there must be
something a little different here because this is not strict priority and so the idea here is that you're going to change the rate of execution based on priority you're not going to say that higher priority always runs over lower priority but instead higher priority has a higher rate of execution than lower priority okay so how does this work so cfs as i've shown you so far isn't really all that different from round robin okay because i kind of said you know you get one over nth of the cpu you get a quantum that's one over nth of the target latency and so it sounds like i just renamed round robin but in fact i didn't okay what i did was um i only showed you the uninteresting case where everybody has the same priority but what if we want to give more cpu to some and less to others what we're going to do is change the rate okay and so we're going to use weights for that so what i showed you earlier was one in which the basic quanta was everybody got target latency time over n and that was this basic equal shares okay a weighted share is something where every thread has a weight and then what we do is we take the current weight divided by the sum of all the weights to find out what fraction of the total weight the thread has times target latency and that tells me my quanta okay so now i'm adjusting the time that i'm allowed to run based to some extent on target latency and we're going to reuse the nice values to reflect the share rather than the priority so cfs uses nice values to scale the weights but it does so exponentially okay now this looks messy but it's not bad so just hear me out so the idea is that the weight is 1024 divided by 1.25 to the nice value so what does this say this says that positive nice values have lower weights than negative nice values okay so as you see here a high nice value has a low weight and a low nice value has a high weight okay
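the exponential weight formula just described can be written out directly; this is a sketch of the arithmetic only, not the kernel's fixed-point implementation:

```python
# cfs-style weight from a nice value, as given in the lecture:
# weight = 1024 / 1.25**nice
def weight(nice):
    return 1024 / (1.25 ** nice)

# negative (not nice) values get high weights,
# positive (nice) values get low weights
assert weight(-5) > weight(0) > weight(5)

# any two tasks 5 nice values apart differ by the same factor,
# 1.25**5, which is about 3x, no matter where they sit in the range
ratio_low = weight(14) / weight(19)
ratio_mid = weight(-5) / weight(0)
```

so nice 14 versus nice 19 and nice -5 versus nice 0 both give roughly a three-to-one weight ratio, and because weights are only ever used divided by the sum of weights, the constant 1024 drops out.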
and so for two cpu tasks separated by a nice value of five you find the one with the lower nice value has three times the weight of the one with the higher and it doesn't matter where it is so if you have 19 versus 14 or zero versus minus five you're still going to get the same proportional difference there okay and now we're going to use virtual runtime instead of cpu time okay and why 1024 the thing to realize here is it doesn't matter what the number 1024 is we could put any number we want here because the only way we use it is with the same number in the numerator and denominator okay and so um yeah this is more about wanting integers than it is about anything else okay the thousand twenty-fours end up cancelling out in this weighted share number so the actual number is more of a convenience for the number of bits you have in your weight than anything else all right now so just to give some of these numbers to you just for the heck of it if you have a target latency of 20 milliseconds minimum granularity of a millisecond and you have two cpu bound threads which are always running then a might have a weight of one and b might have a weight of four how does that work well with the target latency of 20 milliseconds then these two weights which would come from that exponential factor i gave you earlier mean that the time slice for a might be 4 and for b might be 16. so notice b has a bigger time slice than a okay and they're in the ratio 1 to 4.
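the numbers in this example, plus the virtual runtime idea, can be simulated in a few lines; the thread names and the loop are my own sketch, and a real cfs keeps threads in a red black tree rather than scanning for the minimum:

```python
# cfs sketch with the lecture's numbers: target latency 20 ms,
# thread a with weight 1 and thread b with weight 4
TARGET_LATENCY = 20.0   # ms
MIN_GRANULARITY = 1.0   # ms

threads = {"a": {"weight": 1.0, "vruntime": 0.0},
           "b": {"weight": 4.0, "vruntime": 0.0}}

def time_slice(name):
    # weighted share of the target latency, clamped below
    total = sum(t["weight"] for t in threads.values())
    share = threads[name]["weight"] / total * TARGET_LATENCY
    return max(share, MIN_GRANULARITY)

print(time_slice("a"))  # 4.0 (ms)
print(time_slice("b"))  # 16.0 (ms)

# scheduling loop: always run the thread with the smallest virtual
# runtime, then charge it real_time / weight of virtual time, so a
# higher weight makes virtual time advance more slowly
ran = {"a": 0.0, "b": 0.0}
for _ in range(10):
    name = min(threads, key=lambda n: threads[n]["vruntime"])
    slice_ms = time_slice(name)
    ran[name] += slice_ms
    threads[name]["vruntime"] += slice_ms / threads[name]["weight"]
```

after the loop b has received four times the cpu time of a, the one-to-four ratio from the example, even though both threads end up with identical virtual runtimes.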
now let's go back to the fair queuing aspect okay so fair queuing how do we fit the rate of execution back into the picture here because so far we're talking about time of execution but that isn't rate so to fit the rate of execution back in the picture what we want is somehow to give a slightly faster cpu to things with higher weight okay and so if you look here here's an example where we want to give say more time to the higher weight one than the lower weight one but to do our cfs scheduling what we're going to do is we want to schedule this in a way that we can put everybody in the same heap no matter what their weight is and always pick the one that has the lowest amount of time so far and so what we do is we schedule virtual time instead of real time so this is kind of cool so listen to me for a sec here so what you do is for higher weight the virtual run time increases more slowly and for lower weight it increases more quickly so if you think about that for a higher weight i let the cpu run for a second but i only will log say a quarter of a second worth of virtual time whereas for the lower weight one here if i run for a second i will register a second of virtual time if i put those together into my virtual cpu and i make sure that virtual time always advances at the same rate then voila now the ones with the higher weights get to run more time than the ones with the lower weights and it does it in this very simple scheduling idea here where i just want to make sure every thread has the same virtual cpu time okay so the scheduler's decisions are based on virtual cpu time it turns out you take the amount of time you just ran when you give up the cpu you divide it by your weight and you register that virtual time and then you put yourself back in the heap and it turns out they use a red black tree to do this which is a convenient structure that i'm sure you've learned about basically you can always find the next thread to run in o of 1
because it's at the top of the heap and then you run it for a while and you do this same trick okay and now by the way the question here that's in the chat is does this assume that every process has only one thread well this scheduling decision is made per thread not per process right now okay and if you wanted to have some tie to the processes then what you would do is you would adjust their total weights to reflect that okay so you can basically scale their weights to do some process-based scheduling if that's what you were desiring all right now in contrast to the o1 scheduler where every operation was independent of the number of threads here because we're using a heap the scheduling time is order log n but log n isn't too bad and the net result though is this incredibly simple scheduler okay so notice that priorities are reflected by a greater fraction of the cpu cycles or a greater rate of execution that thing about interactivity just happens because when you go to sleep and you wake up your virtual time is behind and so you get to run right away and so all of the really complex heuristics that were in the o1 scheduler have been replaced by this very simple idea of scheduling virtual time all right any questions by the way if a process spawns too many threads then the operating system can make a decision about whether to shut them down or not okay questions so this is a fair queuing with execution rate mechanism of scheduling all right so just to close out this scheduling idea so i wanted to go through cfs in a little bit more detail because this is an actual scheduler that you're probably using now that works pretty well it's not a real-time scheduler because it's working with rates of execution so if you want a real-time scheduler you could install for instance you know an earliest deadline first scheduler on linux and you could use that um so is
there a cap on how much interactivity boost a long-running thread can have yes so if you get too far behind then there is a little bit of a reset that goes on there but for short sleeps it doesn't happen that way okay now um if you care about cpu throughput you might use first come first serve because that's the one that uses things in the most efficient way if you care about average response time then you might want some approximation to srtf because remember srtf is optimal for average response time for io throughput you might also use an srtf approximation for fairness well you might use cfs or if you're caring about the wait time to get the cpu perhaps you'd use round robin if you're interested in meeting deadlines you'd probably use edf one thing we didn't talk about in this class is rate monotonic scheduling which is a type of scheduling that's not as optimal as edf but you can actually do rate monotonic scheduling with a strict priority scheduler like we talked about using the top hundred priorities in linux and so you might do that instead of edf if you're interested in favoring important tasks you might use a strict priority scheduler okay all right so a final word on scheduling before we move on to another topic is when do the details of the scheduling policy and fairness really matter when there aren't enough resources to go around so everything we've been talking about for scheduling is all about how do you choose to divide up your resources among a bunch of shared threads if you didn't care about resources you wouldn't have to schedule or if you had only one thread for instance you wouldn't have to schedule okay so when there aren't enough resources to go around your scheduling policy might start to get really important okay and that's when you really have to be careful about your scheduler okay when should you just buy a faster computer so it could be the
case that your resources are so scarce and you have so many things you have to run that your computer is just not fast enough and you know this goes with pretty much everything uh when might you need another network link or expand your highway or any number of questions around the rates of uh restricted resources are all about how do you schedule and then if scheduling starts to fail when do you buy bigger faster larger things okay um one approach is you buy it when it's going to pay for itself in improved response time um perhaps you're paying for worse response time in reduced productivity customers being unhappy et cetera you might think you should buy a faster something when something's utilized 100 percent because then you know you can't utilize it anymore but i want to tell you that running anything at 100 is always bad okay um you as an engineer should know that you never want to run anything uh at 100 if there's any randomness in the system at all and the reason is that you start getting this queuing behavior like i've shown you in the curve here now we're going to talk about queuing theory in more depth uh in a few weeks actually we may talk a little bit about it next week but in general you see a curve that looks like this with utilization on the x-axis something like response time on the y-axis and a non-linear curve that starts out with a linear section in the low part but then rapidly starts rising okay and then when you're looking at the regular models that um aren't realistic but totally mathematical this uh this high end near 100 goes to infinity of course we know nothing goes to infinity in real life but it always goes pretty high okay and so 100 is definitely not the time at which you want to buy something because you're already seeing this huge super linear increase in response time so your customers have already left you right so an interesting application we'll tell you where the curve comes from in
another lecture but here one thing might be to say as long as i'm in the linear portion of utilization things are basically okay the moment i start getting to the point where it's super linear and things are going up faster than they were in the linear section that's when i start uh to worry about my resources so right around the knee of the curve is usually a good place to um consider buying something new okay and just to give you another instance of 100 being a bad idea if you know that a bridge can handle some maximum weight say call it you know 200 tons you do not want to be running at 100 on that bridge because you know that any sort of randomness is going to take you over the edge right you want to be running down in the linear place where the bridge is behaving normally okay all right good questions we'll tell you why this curve is super linear in a couple of lectures okay um so i actually had this as still grading when i did these slides earlier today but um i believe the grading is pretty close to done um so we'll get those out to you there's also um i know people have been waiting for the bins those bins are out those bins represent final total points as they do in other classes but for midterms there is an offset that you can use with the midterm from historical data that will let you interpret kind of how you did on the midterm um and we'll explain that uh later in a post but i will say having graded this this midterm was clearly too long and i apologize for that it was definitely harder than i think we were expecting so i guess we'll figure that out for the next one so my apologies there the other thing is group evaluations oh and just to cap this off i believe you'll be seeing the um release of gradescope grades either tonight or maybe even uh early tomorrow but very soon and that'll be the process then you can start um putting in uh regrade requests and so on um group
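as a sketch of the curve being described here the simplest single-server queuing model (an assumption on my part, the actual derivation comes later in the course) gives response time = service time / (1 - utilization) which is nearly linear at low utilization and blows up as you approach 100 percent

```python
def response_time(service_time, utilization):
    """Toy single-server queuing estimate (assumed model, not derived
    here): response time explodes as utilization approaches 1."""
    assert 0.0 <= utilization < 1.0
    return service_time / (1.0 - utilization)

# the "knee": the same 9-point utilization increase costs far more
# near saturation than it does back in the linear region
low_step = response_time(1.0, 0.19) - response_time(1.0, 0.10)
high_step = response_time(1.0, 0.99) - response_time(1.0, 0.90)
```

this is why you want to buy capacity around the knee of the curve rather than waiting for 100 percent utilization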
evaluations uh are coming up for project one in fact they may have been mailed out today or they will be tomorrow at the latest every person in the evaluations are going to get 20 points per other partner which you can hand out as you wish no points to yourself okay every term i have to say no points to yourself so this is not about saying well i've got four people in my group 20 times 4 is 80. i'm going to give 80 points to myself okay that doesn't work that way the way it works is you have three other partners 3 times 20 is 60 and you can hand those 60 points out anyway to your other partners they're going to evaluate you okay and um the reason we do this and by the way your tas are going to moderate what's uh what's being said here this is just one piece of information that we use to figure out how things are going but in principle projects are a zero-sum game and you have to participate in your group okay and there are some of you that seem to have fallen off the earth and aren't responding to email if you really don't participate at all and we have that documented in various ways then it's possible that some of your points may end up going to your partners instead of to you so this doesn't happen often but it's a way for us to really uh reward project members that um are working and uh have non-working team members okay so please try to try to work um make sure that if there are any group dynamic issues that your ta knows um and i think i offered that i'd be more than happy to sit down with groups to talk about ways of collaborating if that helps but make sure your ta knows any issues that you might be having with your group and let's see if we can make projects two and three even better um so are the point distributions per person anonymous so um the point distributions are in fact not uh handed out at all so those are purely for our information um they none of your team members know how you graded them and they don't um and you don't know how they graded you 
um but you're gonna uh talk to your tas and their tas will have a good idea how you're doing as well okay um the the other thing uh just to say about this is you know if you were 100 happy with your group members you could give your other three partners or whatever 20 points each and that would be an example of a uh fully um a very happy person uh with the rest of their group okay the other thing i mentioned was this notion of a group coffee hour uh look for opportunities i think in maybe the same email that we send out either group evaluations or our uh how are we doing a third of the way through the term uh survey um we're going to tell you how to basically give it get maybe get extra points for screenshots of you uh with uh your other team members on zoom you know thumbs up or beverage of choice or whatever we're going to call these group coffee hours okay and don't forget turn cameras on for discussion sessions uh if if at all possible all right that was a long administrivia um i realize it's uh really rough being in a fully virtual term and you know a third of the way through the term this is the point at which things start getting uh they seem hard and uh you people you sort of hit a a slow point but let's let's get our excitement back up and get moving and i know we have a bunch of really exciting topics still in the class so and i apologize that that midterm was too long i think we haven't fully figured out how to deal with virtual virtual midterms yet all right good so let's change topics so let's talk about deadlock uh i like to think of this as a deadly type of starvation so starvation as we've been talking about with scheduling uh as as our main instance certainly last time is a situation where a thread waits indefinitely an example might be a low priority thread waiting for resources constantly in use by a high priority thread of course the principle resource being cpu but other things can be there too this is a situation that could potentially resolve 
itself as soon as all the high priority threads are gone so it isn't a permanent scenario but it certainly might be annoying and it might be um you know not what you want because your thread's not running deadlock on the other hand is an unresolvable situation that's a starvation situation and it involves a circular waiting for resources so if you look here we have a situation where thread a is waiting for resource 2 but resource 2 is owned by thread b and thread b is waiting for resource 1 but resource 1 is owned by thread a so here's a cycle and as a result of this cycle both thread a and b are sleeping and will never wake up okay because you know a will never get notified that resource two is ready and b will never get notified that resource one is ready and uh nobody resolves itself and uh nobody's happy okay so notice that deadlock is a type of starvation but not vice versa okay and again starvations can end they don't have to but they can deadlocks can't there's no way to fix this cycle i'm showing you here without fundamentally doing something drastic like thread a killing it off then thread b could run or um i don't know trying to figure out how to temporarily take a resource away from somebody and then give it back okay and both of those situations are usually bad just randomly killing a thread probably isn't what you want to do and randomly taking a resource away from somebody probably gives you bad behavior okay so um what's a good example of a resource other than locks and semaphores so we'll talk about uh memory uh you know disk blocks pick any resource you like a queue um you know think anything that uh you might wait for is a situation where you might be in a pr uh run into problems okay now um you know another example could be that you're waiting for a particular cpu in some special machine that's uh attached in a way to some hardware that other cpus aren't attached to that could be an important resource that you're waiting for so pretty much 
anything that you need to complete your task that might need to be exclusively owned counts as a resource here okay did that help now here's the simplest example uh here's a bridge we have a lot of these in california um i was just uh out driving the roads last weekend and uh encountered one road where there was like three of these single lane bridges uh all because parts of the road had washed out and uh they never got fixed from the last rainy season so that's unfortunate but um you could imagine that uh this might be a source of deadlock under some circumstances so for instance you could view each segment of the road as a resource car has to own the resource that's under it of course and they may need to acquire the segment they're moving into in order to make any progress okay so um for instance if you have a bridge and let's just divide it in two halves you have to acquire both halves and traffic only in one direction at a time is clearly going to be required for that so here's a here's a bridge situation where there's two halves we have two cars that are on each half and we have a bad situation here because the two cars are meeting in the middle and can't make any progress okay and i've shown you here a cycle you know we have the minivan it's trying to get the eastern half of the bridge and we have the uh race cars trying to get the western half of the bridge the minivan owns the western half because it's on it and the and the race car owns the eastern half because it's on it and we have a cycle okay and um how do we resolve this deadlock well if we want to resolve the deadlock in a way that's uh reasonable one of the cars has to back up amusingly enough if you get two people that are unwilling to back up then you get a long term honking going on the other thing to note by the way is because of the ownership of resources prior it's possible that for instance in order for the green car to back up other cars have to back up and so there may be a whole chain of 
resources that have to be uh relinquished and reacquired only in order to undo that deadlock okay those of you that might have taken a database course like 168 or something like that might recognize that um 164 might recognize the situation as some sort of undo or transaction abort okay 186 that's what i meant sorry i'm being i'm being swapped tonight uh so the other thing that can show up in this scenario is starvation if for some reason one direction say uh you know east or west to east is just going so fast that no other car gets in that's actually a type of starvation here not deadlock because um you know as soon as there's no more of that traffic then the other traffic can go all right so let's look at um deadlock with locks here since uh this seems like the simplest thing to start looking at so here here are two threads um this is a situation where the municipality might need another lane right well as i mentioned on that road i was on literally there couldn't be another lane because it was it was uh washed out and there was um just those jersey barriers around there to prevent you from going into the creek so um i would say the local municipality uh wasn't able to fix it so here's a situation where thread a and thread b um look as follows they both are have mutex x and y but thread a doesn't acquire uh of x and an acquire of y it does some stuff then it releases y and then releases x thread b on the other hand acquires y and then acquires x does some stuff releases x releases y so this lock pattern seems simple it seems like something you could write by accident if you weren't thinking about it because you got two resources x and y you need them both uh you write one in one order and the other in the other order and um the problem with this is that this is a non-deterministic failure okay and there's nothing worse than writing something that fails non-deterministically because you can't reproduce it to start with i'm sure some of you have started to run into 
problems like that in uh in the code that you're writing and um you know and the other worst thing is it's going to occur at the worst possible time now if you remember when i was telling you about the murphy's law scheduler or the malicious scheduler view this is a situation where the scheduler will find the bad situation and they'll do it at the worst possible time now let me show you a little bit about the unlucky case so thread a acquires x thread b acquires y now notice the interleaving going on here right thread a tries to acquire y but it's stalled because y is acquired by b thread b tries to acquire x but it's stalled and now the rest of that code never runs okay so this is a deadlock and if you notice here so thread a is kind of waiting for mutex y and thread b is waiting for mutex x and neither of these are going to give it up and so basically we are stalled okay neither thread gets to run we've got deadlock but let's look at the lucky case so the lucky case here thread a acquires x and y then thread b comes along and tries to acquire y but notice that it's uh stuck okay and then thread a releases y and releases x then b finally gets to acquire y then it acquires x and it runs and the schedule doesn't trigger deadlock and if you think about what's involved in getting that exact deadlock case to happen well the scheduler has to line up at exactly the wrong time with this previous case here to get the deadlock most of the time it won't happen and you'll get the lucky case so here you are you ship something to customers and you get a call at 3 37 in the morning because the thing is deadlocked because most of the time you're seeing the lucky case but you didn't see the unlucky case when you were testing okay and the larger the amount of code that isn't in your lock uh case so like here you know we have a few instructions here doing locking but we have a lot of code maybe in the critical section and a lot of code
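a minimal python sketch of the fix implied here: if both threads agree on one global lock order (say x before y) the unlucky interleaving can't produce a circular wait the names x and y follow the example above and the worker body is a stand-in for "do some stuff"

```python
import threading

# the two mutexes from the example above
x = threading.Lock()
y = threading.Lock()
done = []

def worker(name):
    # both threads acquire in one agreed order (x before y), so the
    # circular wait from the unlucky case can never form
    with x:
        with y:
            done.append(name)   # stand-in for "do some stuff"

a = threading.Thread(target=worker, args=("a",))
b = threading.Thread(target=worker, args=("b",))
a.start(); b.start()
a.join(); b.join()
```

with the opposite order in thread b (y then x, as in the broken slide code) this same program can hang non-deterministically which is exactly the 3 37 am failure being described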
outside the critical section you know from a probability standpoint it's just not a high probability event but boy when it happens you are toast okay questions all right everybody good now let me show you another case so here's another circular dependency that's a little bit different but it's similar okay and i'll tell you why i'm calling this a wormhole routed network in a moment but for now there are some trains here they're long trains there's a little tiny train over there too but these long trains stretch uh for a while since they're long and what we've got here is each train is trying to turn right so this uh this eastern facing train's trying to go south the south train is trying to go west the west train is trying to go north the north train is trying to go east and they're blocked because the resource they need which is for instance this west east train is trying to grab that segment immediately after the turn but it can't because there's a train in it okay and this is actually a very similar problem to what you get in a multi-processor network okay so this is a situation where um where you've got basically a wormhole routed network with messages that trail through the network like a worm so instead of trains what we've got is we've got a routing flit at the head and then the body of the messages kind of stretch out all the way back to the source of the messages okay so that's called wormhole routed networking because it looks like a worm and it's routed as that worm all the way through the network okay and here we've just developed a deadlock okay so how do we fix this well what you do in the network case this may not be as practical in a train except maybe in the metropolitan area is you make a grid that extends in all directions and then you force an ordering of the channels okay and the protocol will be you always go east-west first and then north-south okay so what we've just done is we've disallowed by this rule these
two uh parts of the turn so this red turn here and this red turn there so you're not allowed to go north first and then east you're also not allowed to go south first and then west and by disallowing those two turns you will never get deadlocked because you can't fundamentally get a cycle out of it in fact you can even write a proof that shows that this network has no deadlocks in it because a deadlock would require a cycle and a cycle would always require at least one of these disallowed turns okay now again this is not as practical in a train network but certainly in a computer network if you have a mesh what you can say is i always have to route east west first and then north south and as a result you can end up with no deadlocks okay questions all right now by the way xy routing is a real thing or you can look up dimension ordered routing there are real networks that behave that way including uh you know the interior networks that are part of the intel chips so this is a real thing and it's a way of avoiding deadlock so it's kind of nice because you can avoid it mathematically other types of deadlocks there are many of them right so threads block waiting for resources like locks and terminals and printers and drives and memory threads might be blocked waiting for other threads like pipes and sockets you can deadlock pretty much on anything like that and all it requires is getting some sort of cycle involved okay so we might want to figure out a little bit about how to avoid these kinds of deadlocks okay so here's an example of one with space right so here thread a is going to do an alloc or wait of one megabyte and then another megabyte and then free free and thread b does the same thing well if there's only two megabytes total of space you can imagine that a gets a megabyte then b gets a megabyte and uh we're now deadlocked in just the same cycle as before but it looks a little different okay and we'll talk about how
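here is a small sketch of dimension-ordered (xy) routing as just described: take all east-west hops before any north-south hop, which rules out exactly the two disallowed turns the function name and the (column, row) coordinates are made up for illustration

```python
def xy_route(src, dst):
    """Dimension-ordered (xy) routing on a mesh: finish every
    east-west hop before taking any north-south hop, so neither
    of the two disallowed turns can ever occur."""
    (x, y), path = src, [src]
    # phase 1: move only in x (east-west) until the column matches
    step_x = 1 if dst[0] > x else -1
    while x != dst[0]:
        x += step_x
        path.append((x, y))
    # phase 2: only now move in y (north-south)
    step_y = 1 if dst[1] > y else -1
    while y != dst[1]:
        y += step_y
        path.append((x, y))
    return path
```

since every path turns at most once and only from east-west into north-south no set of such paths can close a cycle which is the proof idea mentioned above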
to think about cycles that uh that have resources where there's multiple equivalent pieces of the same resource in a little bit so in order to move our way along this let's talk about what i like to call the dining lawyers problem so we have five chopsticks and five lawyers okay and a really cheap restaurant and it's a free-for-all so what we do is we put one chopstick in between each lawyer okay and the lawyers are going to grab and by the way nothing against lawyers this is just the example here but you need two chopsticks to eat and um if everybody grabs the chopstick on their right we now have deadlock because nobody can can eat okay so that's a that's a deadlock it's a larger cycle than just uh you know two resources and two threads but it's still a deadlock because there's a cycle so how do you fix the deadlock well you could make one of them give up a chopstick and eventually everybody gets a chance to eat oh and by the way this is such a cheap restaurant that you have to share the chopsticks after they've been used and you put them down so perhaps during a pandemic you wouldn't want to do this solution um how do you prevent a deadlock well that's more interesting right so the way you might prevent a deadlock here is to never let a lawyer take the last chopstick if no hungry lawyer has two chopsticks afterwards now wait a minute what does that mean if you never let a lawyer take the last chopstick if as a result of taking that no other lawyer has two chopsticks then you know that there's always somebody that can finish dinner and lay down their two chopsticks and then let somebody else go forward okay so there is a solution to this that involves uh looking ahead that maybe we can formalize in some way okay but to do that we need to talk a little bit more about deadlock so what is required what's the minimum requirements to run into a deadlock well first and foremost mutual exclusion is a requirement so that says that we have resources that can be possessed 
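as a sketch here's one standard way to prevent the dining lawyers deadlock in code note this uses a global chopstick ordering (grab the lower-numbered one first) which is a different prevention trick than the "never take the last chopstick" rule described above but it removes the same circular wait

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]
ate = []

def lawyer(i):
    # grab the lower-numbered chopstick first: with one global
    # ordering, the "everyone holds their right chopstick" cycle
    # can never form, so some lawyer always gets to eat
    first, second = sorted((i, (i + 1) % N))
    with chopsticks[first]:
        with chopsticks[second]:
            ate.append(i)   # eat with both chopsticks held

threads = [threading.Thread(target=lawyer, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

if every lawyer instead grabbed the right chopstick first this program could hang with each thread holding one lock which is exactly the deadlock in the picture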
exclusively by a thread such that no other thread can use them okay so remember we've been talking about mutual exclusion as a way of keeping multiple threads out of the middle of a particular block of code this is the same idea but this is for general resources we're saying that we have resources that can be exclusively held onto by one thread and requested by another but not acquired until the first thread is done with them that's mutual exclusion the second is this idea of hold and wait which says that if a thread has multiple resources it's already acquired and it's waiting to acquire another one then it's going to hold on to all the resources that it's got so what happens is it grabs resources tries to grab the next one and it can't but it's gonna hold on to all the other ones okay so you need to not only have mutual exclusion of resources but you gotta be able to uh have a situation where you hold them and wait on them there also needs to be a situation with no preemption so not only do you hold resources uh while you're waiting for other ones but it's not possible to take a resource away from somebody okay and that's kind of like if you think about the bridge example um what would be uh what would be a preemption case there well that would be godzilla comes by grabs one of the cars that's honking and uh tosses it into the other valley and now we've just broken the deadlock okay so we're assuming that something like that can't happen i'm assuming you all know who godzilla is but perhaps i'm dating myself on that and the fourth thing excuse me is you need to have a circular wait where there exists some set of threads that are waiting t1 through tn where t1 is waiting for something held by t2 t2 is waiting for something held by t3 etc and tn is waiting for something held by t1 all right and now as a result um we have a cycle okay if you don't have a cycle of waiting there is no deadlock now what i want to
make sure i'm clear on here is you can have all these things and not have deadlock but if you don't have one of these things you don't have deadlock okay so these are minimum requirements but they're not sufficient they're just they're just necessary okay so we're getting somewhere and if you were to think through all of the examples of deadlock i've shown so far it had all of these properties to it so let's talk about how to detect deadlock and to do that we're going to build a resource allocation graph so here's our model we have threads which are going to be circles with t sub something in them we're going to have resources which are going to be rectangles and uh we'll call them r1 r2 etc and notice the number of dots in the rectangle represents the number of instances of that resource in the system so these are all equivalent excuse me so in the case of memory remember that example i showed you a little bit ago where we were allocating one megabyte and then one megabyte and then one megabyte each megabyte is equivalent in those instances so we would build that as a rectangle with a bunch of dots representing all the equivalent megabytes and we'd call that a resource okay the resources which were mutexes or locks that we were talking about earlier might be an example here of a square with a single dot in it okay every thread is going to utilize a resource by first requesting it then using it then releasing it okay and this notion of request use release is kind of that that idea of mutual exclusion where between request and release if i'm in the use phase uh nobody else can use that particular resource but that means a particular dot is now used not all of the resources that are equivalent okay so our resource allocation graph is very simple okay it's a it's partitioned into two types of nodes t nodes and r nodes and we're going to build that graph where there's a request edge uh which is sort of t one to r j and that basically says that thread one wants resource 
j or an assignment edge r j to t i which basically says that r j is owned by t i okay and that's going to build a graph for us and then we're going to go through that graph and figure out whether we have deadlock okay i have some examples here so remember the model is request edges and assignment edges and so here's a simple example so here's an example of threads one two and three thread one uh is requesting resource one that's what this uh request edge looks like here we have an assignment edge r1 is owned by t2 okay everybody see that so here's an instance where r4 there's three possibilities there but only one of them is currently owned by t3 okay everybody with me so far now once we have a graph like this then we can do graph operations on it and very quickly decide whether it's deadlocked so for instance here's an example of a graph with a deadlock now it's not your simple deadlock but if you look here it's got t1 is waiting for r1 but r1 is owned by t2 r3 is owned by t1 one of the instances and the other instance of r3 is owned by t2 t2 is waiting for r2 but r2 is owned by t3 and finally t3 is uh waiting for r3 and if you look at this scenario this is an unresolvable situation where there's no way that any of the threads can advance and make forward progress okay now so good question uh so the question was so a cycle leads to deadlock no a deadlock needs a cycle very important here a cycle is merely necessary for deadlock not sufficient so for instance good question i clearly paid him uh to ask that question if you look here here's an example of a cycle but no deadlock so notice that t1 is waiting for r1 one of the r1s is owned by t3 t3 is waiting for r2 one of the r2s is owned by t1 so there's a cycle here but what we also see is that if t4 finishes it'll free up an r2 and then t3 can get what it needs and it'll finish and then it can free up an r1 and then t1 can finish so just because you have
a cycle doesn't mean you have a deadlock but if you have a deadlock you know you have a cycle all right good so now we're armed and we can figure out how to detect deadlock right so here's a simple algorithm and the key thing about this algorithm is just understanding the um the symbols here so i'm going to have a vector of resources so this is a vector it's going to be a comma separated list and for each resource r1 r2 r3 r4 i'm going to say how many of those resources are free so in this case here r1 and r2 are completely taken so we're going to have 0 comma 0. we also have current requests from thread x so if you notice for instance t1 is currently requesting an r1 but it's not requesting an r2 so the request for uh t1 is going to be one comma zero the allocation for t1 well it owns an r2 but not an r1 the allocation will be zero comma one okay so these are just vectors of numbers of free resources and how much is being requested and how much is allocated by each thread so if you can get past that then it's very easy to do this we just do a list-based algorithm where we say the total number of available resources is the vector of free resources we put all the nodes um excuse me i should say all the threads into the unfinished bin and then we're going to start by setting done equal true and then i'm going to go through and for every node that's in the unfinished bin i'm going to say well are there enough resources available of each type that i can get what i'm currently requesting and if the answer is yes then i figure out that as a thread i can get all of those resources so i'm going to be able to finish and i'm going to remove the node from the unfinished bin and then i'm going to add all of its resources back into the available pot because i'm now done with that thread and then the done flag for this pass of the
algorithm i'm going to set it to false and and then i'm going to go keep going and when i'm done with the first do loop i'm going to say gee did was there any thread that finished as a result of going through if the answer is no then i'm done and as a result i've got some nodes left and unfinished and i'm deadlocked because there's no way to finish this on the other hand if i did pull a thread out in that pass i'd go back and try it again and i just keep looping as long as threads are finishing and if i eventually finish everybody then i know there's no deadlock okay so how do i know there's no deadlock because there is a path where threads can complete one at a time and will eventually everybody will be finished and i won't exceed the uh the total resources in the system and each thread as it finishes puts the resources back in the pot and um and then potentially those can be used by other threads okay and if i did that let's see um so the question here uh is basically this is all fine and dandy but is it possible that uh we could have a situation where one thread gets a resource uh and as a result uh other threads can't finish and you end up with deadlock i think that was the question and the answer here is if you notice this deadlock algorithm is very careful okay it's saying if a thread can get all of the things it needs all of it right all of its requested remaining resources it can get them all at once then i'll declare it finished and put all of its resources back in the pot and then as a result i haven't prevented anybody else from running all i've done is freed up my own resources which they might potentially use okay so this particular deadlock detection algorithm is saying is there any path that i could take through the threads that would let them all finish okay did that answer that question great so um how do we so we can detect deadlock but how do we deal with it oh by the way um can anybody tell me if i run this algorithm and i see there is no 
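the detection loop just described can be sketched directly the vectors of free resources and per-thread requests and allocations match the notation above while the function name is made up for illustration

```python
def detect_deadlock(free, requests, allocations):
    """Sketch of the detection algorithm described above: repeatedly
    find an unfinished thread whose full outstanding request fits in
    the available vector, let it finish, and reclaim its allocation.
    Returns the set of threads that can never finish (empty set means
    no deadlock)."""
    avail = list(free)              # [Avail] vector of free resources
    unfinished = set(requests)      # every thread starts unfinished
    done = False
    while not done:
        done = True
        for t in list(unfinished):
            if all(req <= a for req, a in zip(requests[t], avail)):
                # t can get everything it's requesting, so it can run
                # to completion and put its resources back in the pot
                avail = [a + held for a, held in zip(avail, allocations[t])]
                unfinished.discard(t)
                done = False        # a thread finished, loop again
    return unfinished
```

with two threads each holding one mutex and requesting the other this reports both as deadlocked while a cycle-with-spare-instances case like the graph example above comes out clean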
deadlock according to this algorithm does that mean that all threads will finish i've got both yes and no on here okay anybody want to argue both with a question mark nobody wants to argue okay so the answer to this is no but it's not the fault of the algorithm okay you got to be careful about what is this algorithm telling me it's telling me that if the threads are asking for resources they need they use the resources they free them up then other threads can go forward we're all happy but if a thread goes into an infinite loop or something else happens or doesn't free up the resources for some reason then this algorithm really doesn't tell you anything right so this this algorithm is assuming that the threads are really just requesting resources and freeing them up and not doing anything else stupid like going into an infinite loop okay so or asking for more things than they originally said they need okay so this is this is a very restricted algorithmic algorithmic result here okay now how should a system deal once it's discovered deadlock okay we have four approaches here that i wanted to mention one is deadlock prevention so this is a situation where you write your code in a way so it will never deadlock okay now i think i showed you that earlier uh when we talked about removing the cycles from the network by uh eliminating certain directions of travel right so that would be a prevention scenario deadlock recovery is a situation where you let the deadlock happen and then you figure out how to recover from it okay that's the godzilla approach deadlock avoidance is dynamically delay the resource request so that even though in principle you could get a deadlock it doesn't happen and then finally i like to put this last one out because this one's important and you should all know this exists i call this deadlock denial or deadlock denialism okay so this is ignoring the possibility of deadlock and claiming that it never happens okay and so modern operating systems 
kind of make sure the system itself isn't involved in any deadlocks and then pretty much ignores all the other deadlock and applications i like to call this the ostrich algorithm okay so this is why sometimes you have something running and you got to reboot the operating system to fix something okay that that oftentimes is because there's some deadlock that nobody uh planned for nobody detected and nobody uh had any way to deal with other than just rebooting things okay and unfortunately that's a much more common than you might think all right so let's talk a little bit about uh prevention here for instance so one thing you can do is put infinite resources together okay so that's uh you know infinites big right but what we're really saying is you include enough resources so that no one ever really runs out of them doesn't have to be infinite just really big and you give the illusion of infinite resources so a nice example of that might be virtual memory which under most circumstances appears pretty big right um another somewhat less practical example might be the bay bridge with 12 000 lanes you never wait okay so that might be nice um never going to happen right infinite disk space well we're pretty close to that in a lot of instances right you can buy a hundred terabyte um disk drive these day these days uh that's using flash memory and that's pretty big okay um you could decide to never share resources if you think about the cycles we're talking about earlier cycles require that there's a resource that's being used by one person that is uh needed by somebody else right if you never have any need for sharing you'll never have any deadlock because you can't come up with a cycle another option would be never allow waiting so notice what i'm doing here by the way is i'm removing uh one of those four requirements for deadlock right so not allowing waiting is really how phone systems work it used to be a lot more common it still happens occasionally where you try to 
call somebody and it the call phone call actually works its way through the phone network but it gets blocked somewhere because there's not enough resources and what happens is you get a busy signal what's really happening there is it's it bounces the call and it assumes that you're going to retry by making the call again okay so what they've done there is they've avoided deadlock in the network by pushing you off to doing a retry okay this is actually the technique used in ethernet in some multi-processor networks where you allow everybody to speak at once on a segment and if there's a collision then what happens is you exponentially back off with some randomness and retry and as a result the the problem goes away okay so this is a technique of random retry instead of a potential deadlock we'll talk a little bit more about that uh later in the term um now it can be inefficient if you don't use the right algorithms you know the goofy thing here is you consider driving to san francisco and the moment you hit a traffic jam you're instantly teleported back home and have to retry that would be an example of uh you know a retry mechanism that probably would never work because you could never make it through so if you're really going to reject and force a retry there has to be some notion that there's going to be eventual success on that channel okay here's an example of that virtually infinite resource i mentioned earlier while we said this could deadlock if there's only two megabytes in the system but with virtual memory you have effectively infinite space so everyone just gets to go through and you won't deadlock okay now of course it's not actually infinite but it's certainly a lot larger than two megabytes all right um how do we prevent deadlocks okay maybe this is a little more interesting so you make all threads request everything they need at the beginning and then you check and see if you've got enough resources and if you do you get to go and if you don't you 
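the random-retry idea just mentioned, ethernet-style exponential backoff, might look roughly like this; `try_send` here is a hypothetical operation that reports whether it got through or "collided":

```python
import random
import time

# sketch of bounded random exponential backoff: on each failed attempt,
# wait a random time drawn from a window that doubles, then retry.
def send_with_backoff(try_send, max_attempts=8):
    for attempt in range(max_attempts):
        if try_send():
            return True        # made it through: no blocking, no deadlock
        # randomized, growing wait spreads colliders apart so they don't
        # just collide again in lockstep (units here are milliseconds)
        time.sleep(random.uniform(0, 2 ** attempt) / 1000.0)
    return False               # caller must cope with eventual failure

# a flaky "channel" that collides twice before succeeding
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(send_with_backoff(flaky))   # True
```

note the cap on attempts: as the teleported-back-home example says, a retry scheme only makes sense if there's some notion of eventual success.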
don't okay so if you think about that there'll never be a situation where you're in the middle of execution you have some resources you're waiting for others you've just basically removed the wait portion of that cycle okay the problem here of course is predicting the future as to what you need for resources and you often end up overestimating for the example if you need two chopsticks you request both at the same time that may or may not work well or imagine reserving the bay bridge that wouldn't work too well either you don't leave home until you know no one's using any intersection between here and where you want to go that actually works pretty well if you're traveling around 1:30 at night across the bay bridge there might be enough channels there or lanes to know for sure you're going to make it without being delayed you could force all the threads to request resources in a particular order all right well this is more interesting so for instance to prevent deadlock you always acquire x and then y and then z okay so if you always acquire x and then y then z you can prove fairly simply that that'll never deadlock because any deadlock involving those resources would have to be a cycle and therefore a cycle would mean that some thread acquired something like z and then acquired x or z and then acquired y so any actual cycle in a supposed deadlock would show you going backwards in your acquisition and as a result can't happen because you always have to get x and then y and then z and this by the way is exactly that dimension-ordered routing that we talked about in multiprocessor networks earlier right so here's an example so rather than what we showed earlier where you could get x and then y for a and then y and then x for b you just acquire them both at once okay so here we get both x and y together either you get them all or you don't and as a result there's no cycle that's the first thing i showed you the second
was you maybe put a lock z around the whole thing you grab z okay there's no cycle around z and if you happen to acquire z then you can acquire what you want okay and that won't deadlock because there's no cycle or here's the consistent order so rather than x then y and y then x what you do is you always go x then y x then y okay and as a result it'll never deadlock okay now does it matter which order the locks are released notice here we always go x then y x then y but here i'm releasing y then x and here i'm releasing x then y doesn't matter okay good it doesn't matter because the only thing i do with releasing is i'm letting people go forward i'm not holding them up right so the releasing can be done in any order it's the acquisition that has to happen in the same order i will say though typically you acquire them in one order and you release them in another and that's just a way of making sure that you've got a nice clean pattern there and this is kind of what we looked at when we were talking about the finite buffer queue when we were looking at semaphores a little while ago the train example here is the fixed ordering of the channels you're always getting the x channel and then the y channel and as a result there's no way to have a cycle because any cycle would show you having the y channel first and then the x channel okay all right and this works in more dimensions you can have x y z w whatever as long as you get them in a given order then you don't have deadlock so how can we recover from deadlock so here you could terminate the thread force it to give up resources that's the godzilla solution i told you about earlier you hold the dining lawyer in contempt and take them away in handcuffs and you make sure you get the chopsticks first but it's not always possible because killing a thread holding a mutex would actually leave things inconsistent and probably screw everything else up okay so taking things away is
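a minimal sketch of the consistent-ordering rule using python locks; every worker takes lock_x before lock_y, so no waiting cycle can form no matter how the runs interleave:

```python
import threading

lock_x = threading.Lock()
lock_y = threading.Lock()
count = 0

# every worker acquires in the same global order (x, then y), so a
# waiting cycle can't form; `with` happens to release in reverse order,
# but as noted above the release order is irrelevant to deadlock.
def use_both():
    global count
    with lock_x:
        with lock_y:
            count += 1        # stand-in for work needing both resources

threads = [threading.Thread(target=use_both) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)   # 50
```

if one of these workers instead took lock_y first, two threads could each hold one lock while waiting for the other, which is exactly the cycle this discipline rules out.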
rarely a good thing in one instance you could preempt resources without killing the thread but then again the threads think they have the resources exclusively and you've just taken them away so the thread's going to not behave correctly okay so the one case where this actually works out well is when you have enough information to do a full roll back or an abort and this is sort of the database idea where before you grab your locks you have a checkpoint of the state and now you go ahead and start running if there's ever a deadlock you roll back to a time prior to the deadlock and you restart things maybe with some randomness so the deadlock doesn't happen again this is a very common technique that databases can use because they can roll back to a consistent state before they retry after detecting a deadlock so this is the one instance where you can just back up and take resources away and retry it and make sure the deadlock doesn't happen again okay many operating systems have other options but i will say that unix operating systems often use the denialism technique so here's this other view of virtual memory so we said well we could think of it as infinite space this isn't a problem if you look one level deeper which will be appropriate in the next lecture we could say that what actually happens when we run out of memory for one of the threads is we preempt that memory paging it out to disk and giving it back later when we page it back in and as a result we can take memory from one thread that means dram physical memory give it to a different thread and everything's okay because when we come back we grab the data and give it back to the thread we took it away from and we don't let the thread look at it in the middle and so we find a way to suspend the use save the state and avoid the deadlock okay and that's kind of what paging does okay
let's talk about avoiding okay so when a thread requests a resource the operating system checks and sees would it result in deadlock if not it grants the resource right away if so if there's going to be a deadlock it waits for the other threads to release resources so this almost sounds good right so the idea is we we somehow look and we say will this re will this thing we're giving it have a deadlock if so don't give it the resource otherwise do and the issue here is let's show this example here's our thread a and b that could deadlock we acquire x no deadlock there we acquire y there's no deadlock there there's no cycle we acquire y still no deadlock okay now notice at this point thread a is blocking because it's trying to acquire a resource b has we still don't have a cycle because b is happily running here we say oh if we acquire x we're going to have a cycle and therefore deadlock so we'll wait problem is it's already too late because there's already impossible it's already impossible for this situation to resolve even though there isn't a cycle at the moment you try to acquire x so we have to do something a little better here and so here i'm going to introduce three states there's the safe state which is the system can delay the resource acquisition to prevent deadlock so this is a situation where we can make forward progress and we won't deadlock there's deadlock where we're in trouble right and we already have a cycle and then there's an unsafe state where there's no deadlock yet but threads could request resources in a pattern that will unavoidably lead to deadlock that's what we had in that previous slide we already had an unsafe state okay and actually the deadlock states considered unsafe as well because once you're dead locked it's not safe so deadlock avoidance is preventing the system from reaching an unsafe state so how do we do that so for instance when a thread requests a resource os checks and sees if it would result in an unsafe state if not it 
grants the resource if so it waits so how this changes our example is thread a grabs x everybody's good thread b tries to grab y but we look and we say oh if we acquire y we're already down the path to an inevitable deadlock and therefore thread b is not even allowed to acquire y okay if we could come up with that then what happens is b goes to sleep and it's stalled a goes on to acquire y does its thing releases them both at that point b gets released and now we're good to go and we don't have any deadlock so this algorithm that i've sort of implied has somehow kept us in a safe state and therefore we don't deadlock and that's kind of what we'd like to do so we have something called the banker's algorithm a first step toward the right idea is to state the maximum resources you need at the beginning and you allow a particular thread to proceed only if the total number of resources minus the number i'm requesting still says that the remaining amount is greater than the maximum that anybody needs so we take the current available minus what i'm asking for and as long as what's left is greater than or equal to the max that anybody will need i'm good to go why is that okay well that says basically that gee even though i've been given these resources there's always somebody that can complete so this is not quite what we want this is a little too conservative okay instead the banker's algorithm is a little less conservative and it's going to let you ask for resources free them ask for them free them and so on and what we're going to do is every time somebody asks for a request we'll grant it as long as there is some way for the threads to run such that they will complete without a deadlock so we only grant a resource if there's some way to complete without a deadlock so the technique here is to pretend that we grant the resource that's being asked for and then we run the deadlock detection algorithm and in that case we're going to
substitute this which is say take the maximum that anybody wants take the maximum that a given node wants minus the amount they have um and see whether it's less than what's available and we're going to replace that for what we asked about earlier which is um you know seeing whether what we're requesting is is uh less than what's available okay and so here notice that in this deadlock detection algorithm we're going to say that for every node instead of if requesting the amount we're requesting is less than what's available we're going to say if we take the maximum we need minus how much we have is less than what's available then we're gonna we can finish okay and so this is like that simulation that i talked about earlier where for um we temporarily grant the thread that's asking for something and then we go through and we say is there a way to let some thread finish and then let some other thread finish and so on as long as there's still a path through that'll allow the thread some the whole set of threads to finish we're not dead locked and we're still in a safe state okay so basically that algorithm as simple as it is which is substituting this into the deadlock freedom algorithm keeps the system in a safe state which says there's always a sequence t1 t2 to tn where t1 completes then t2 completes and so on there's always a path out even if i pretend to give the resource and if that's true then i go ahead and give the resource okay so the way you need to think of the banker's algorithm i realize i'm a little bit low on time just give me a few more minutes the way to think about the bankers algorithm is that it's a simulation of what would happen if i granted the resource to the thread would i still be able to find a way out such that every thread completes and if the answer is yes then go ahead and give it okay and this is a an actual algorithm that we could run on every acquire and release of every resource that would prevent us from deadlocking and it would 
actually uh do that run that uh example i showed you earlier where we grab x and grab y and then the other case grab y and then x this particular algorithm would actually prevent that from ever deadlocking because thread b would be forced to wait until thread a was fully done in that instance okay so in some examples here if you think through the banker's algorithm what you would find is that a safe state which is one that doesn't uh cause deadlock is if when you try to grab a chopstick either it's not the last chopstick or it is the last chopstick but somebody else has two if either of these conditions are correct then um you're going to still be in a safe state and you can allow that chopstick to be acquired where this gets a little bit amusing as you can imagine the k-handed lawyer case which is you don't allow if it's the last one and no one would have k chopsticks or it's the second to last one and no one would have k minus one and so on okay so um we're about done here yes uh we can actually do a a uh k-handed lawyers case so i want to pause for two seconds about whether there's any uh questions here and then i'm gonna finish up so you have to think about the the way you think about the banker's algorithm is on every acquire or release of every resource you pretend that you give that resource the colonel does this it pretends it's going to give that resource it runs that special deadlock detection algorithm and says am i going to go into deadlock uh or is there a path out of this if there's a path out then it will grant the resource if there is not a path out then it won't grant the resource and instead put that thread to sleep until there are enough resources all right good so the question of course is do most os implement this so as i already told you unix essentially uses the ostrich approach or uh deadlock denialism however uh if you care you can implement this so there are some specialized os's that do do this the second thing that's kind of interesting 
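that chopstick condition can be written down directly as a predicate; this is my encoding of the stated rule for the two-chopstick case, ignoring the k-handed generalization:

```python
# `free` is how many chopsticks are still unclaimed; `holders` is how
# many each lawyer currently holds (two-chopstick eaters only).
def safe_to_grab(free, holders):
    # safe unless this is the very last chopstick and nobody holds two
    # (with two, somebody can eat, finish, and release theirs)
    return free > 1 or any(h == 2 for h in holders)

print(safe_to_grab(2, [1, 1, 0]))   # True: not the last chopstick
print(safe_to_grab(1, [1, 1, 1]))   # False: last one, nobody can finish
print(safe_to_grab(1, [2, 0, 1]))   # True: a lawyer with two can finish
```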
is i think shown by this page here which is you can use the banker's algorithm to design a way of accessing resources that won't deadlock so you can use the banker's algorithm as a way of designing how you go about asking for resources in a given application then you don't actually need the banker's algorithm running live because you've set it up so that it's running as part of your actual application okay and you could even run a banker's algorithm library inside of an application instead of the operating system all right so we talked about the four conditions for deadlocks we talked about mutual exclusion which is that when you get a resource you have exclusive hold of it hold and wait which is i hold on to other resources while i'm waiting for ones that i'm looking for no preemption says i can't take resources away circular wait says there's at least a cycle in the system all four of these are required these are necessary but not sufficient for deadlock we talked about techniques for addressing the deadlock problem we can either prevent it by writing our code so it won't be prone to deadlock and that includes things like dimension order routing we can recover from deadlocks by rolling back we can avoid it entirely by something like the banker's algorithm or we can totally ignore the possibility which unfortunately a lot of things do all right i think we're good for today and look forward to the results of the midterm grading coming out and again it was a little longer than we intended we apologize for that i hope you have a great weekend everybody i hope that you can get outside and that the air improves a little bit ciao
CS162_Lecture_16_Memory_4_Demand_Paging_Policies.txt (CS_162_Operating_Systems_and_Systems_Programming_Berkeley)

welcome back everybody it's hard to believe but we're on lecture 16. term has been flying by and we've been talking about virtual memory and so we're gonna continue in that vein today i wanted to fill out a little bit more of the caching discussion that we had last time just to remind you again about average memory access time this is 61c material that hopefully you'll be familiar with but if you remember the average memory access time is composed of two different times combined probabilistically now we're in this situation where we have a processor talking to say an l1 cache which is talking to dram and the trick is to figure out how the cache improves our performance and to do that we have an aggregate hit rate probability that we come up with for the cache and the hit rate is the percentage of the time that we actually get a hit in the cache and then the miss rate of course is one minus that the hit time is how long it takes when we hit and the miss time is how long it takes when we miss so this is not rocket science here but i just wanted to put up some definitions and equalities here just to make sure we're all on the same page here clearly the hit rate plus the miss rate better be one or something weird's going on the other thing is the hit time which is the time to get from the l1 cache is actually a part of the miss time as well so when we talk about miss time it's not only the penalty which is going down to the dram and pulling it into the l1 cache but then we have one more hit time afterwards so this miss time we're talking about up here is actually hit time plus miss penalty okay and then of course the miss penalty is the average time to go from the lower level which in this case is dram all right so this is all 61c if you were to take some of these blue items and put them into the original equation and
rearrange things a little you'd actually see that another way to talk about average memory access time which is the one i often prefer for a reason i'll show you in just a second it's really the hit time plus the fraction of time you miss times the miss penalty okay and so why do i like this one better well suppose we've got more levels okay so we have an l1 cache and an l2 cache to dram well then we can just take our new equation that was in red up there and say the average memory access time is the time to hit in the l1 cache plus the miss rate in the l1 cache times the miss penalty okay i haven't done anything but just copy here but what's interesting about this is what's the miss penalty in the l1 well that's the time to get it out of this l2 dram combination and so if you notice the miss penalty of l1 is just the average time to get from the lower level which is really just the hit time of l2 plus the miss rate of l2 times the miss penalty of l2 okay and you can do this recursively in this case the miss penalty of l2 is just the average time to fetch from the dram and so the average memory access time of this total combination is the hit time at the l1 plus the miss rate at the l1 times and then in parentheses the hit time of l2 plus the miss rate of l2 times the miss penalty of l2 okay and so on so you can keep doing this recursively modern chips like the ones we've been talking about typically have three levels of cache on chip these days l1 and l2 are part of the core and then every core has an associated slice of l3 for instance and then there are many of the cores on chip all right good now the other thing we've been talking about of course is caching in general applied to translation and that gave us our translation lookaside buffer or tlb and okay there's a good question here that's on the chat which is could you start accessing dram in parallel with the cache yes in fact that's an
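plugging made-up latencies into the recursive formula shows how the levels compose; none of these numbers come from the lecture, they're just for illustration:

```python
def amat(hit_time, miss_rate, miss_penalty):
    # average memory access time = hit time + miss rate * miss penalty
    return hit_time + miss_rate * miss_penalty

# illustrative latencies in cycles: l1 hits in 1 and misses 5% of the
# time, l2 hits in 10 and misses 10% of the time, dram costs 100
l2_time = amat(10, 0.10, 100)      # l1's miss penalty = 10 + 0.10*100 = 20
total = amat(1, 0.05, l2_time)     # 1 + 0.05 * 20 = 2
print(round(total, 3))             # 2.0
```

the point of the recursion is visible in the numbers: the slow 100-cycle dram only costs you 5% of 10% of the time, so the average stays near the l1 hit time.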
optimization that's often done with uh server class chips where they'll actually start a dram access even while they're busy doing the uh checking in the cash the downside of that is twofold one you're burning energy because dram is one of the biggest uh non-pipeline consumers of energy so that's going to be expensive from an energy standpoint and it also means that if you're accessing a dram it means that somebody else couldn't be accessing a dram so if this dram is shared you've just slowed things down but in a typical server environment where maybe you don't worry quite as much about power but you really want performance then you could certainly uh start a fetch early even while you're looking in the cache so applying caching to tl to translation as we said we just basically have the tlb which is a cache and so we say here's a virtual address is it cached if the answer is yes we go ahead and we go to this physical memory which is a combination of caching and dram whatever because we have a physical address and so this is a very fast path on the other hand if we miss it then we've got to go to the mmu to translate which means walking the page table we get our result back we'll store it in the tlb cache for next time and then we'll go ahead and access and of course oftentimes there's the ability to go around the translation entirely so the question of this of course is one of page locality does it exist so that we'll mostly be hitting in the tlb and we kind of made the argument that uh yes there's there's a fair amount of page locality uh certainly in the instruction and stack uh accesses but even the data accesses have some good locality in them and we can build a tlb hierarchy if we want this lookup to be fast and so um rather than having a fully associative 128 or 512-way tlb we might have a small direct map cache as a first level tlb and a more highly associated associative second level cache and we just showed you the equations on the previous page of how you 
might analyze that okay now um so and again the other thing i wanted to point out this is a slightly different picture of the the hierarchy than i showed you before um what notice that between the dram and the secondary disk i've actually got uh flash or ssd storage which is uh pretty much more modern system i will point out that the page tables are in memory okay and so um we want we're hoping that those will mostly be cached the other thing i'm showing you here is that things like registers l1 l2 cache l3 cache main memory are all accessed in hardware and however the when we're talking about caching and as demand paging and that's what we're doing with this lecture then all of that's managed in software kind of by the os and so the main memory becomes a cache on ssd or disk and so um that's all going to be managed in software and today is going to be the day that we talk a lot about how do we manage that those page tables so that we get sort of the best result from our caching and notice the tlbs are up here very fast kind of at the speed of registers and then the page tables in dram and so on are are much slower by many orders of magnitude and so the key is going to be we're hoping that our tlbs get enough uh locality to help speed up the page tables okay so what we started uh talking about last time was demand paging mechanisms and the page table entries in the page table uh make it possible to build uh demand paging okay and so we know that in the intel chipset um well we know that in general ptes there's a valid bit and when that valid bits set equal to one the page is in memory and the hardware can go ahead and do the reference when it's not set to one or zero the page is not in memory and you get a page fault okay and um of course in the intel chips this is called present as opposed to valid but it's the same idea and so what i showed you here this is again something we had uh at the end um is basically suppose the user references a page with an invalid 
page table entry we hit a page fault then uh what do we do well the memory management unit at that point was walking its way through the page table found an invalid entry um and caused the page fault and so now we're going to have to do something which is we need to pull that thing off the disk for instance and what does the os do in this case well it has to find it on disk and then it's got to find space where is it which page is it going to put in dram which page is it going to use in dram to to handle this fault and the and at steady state all of the dram is potentially full and so the first thing we got to do before we can even handle the page fault is we got to choose an old page to replace okay and uh that's going to be a big topic for today how do we replace a page which one do we choose and so let's suppose we know how to do that already we picked a page the first thing we're going to do is say well if that page has been modified or the dirty bit is one we need to write the contents back to disk because it's got up to more up-to-date contents that are actually on than are actually on disk and so um we got to clear that page out before we can even use it and then we're going to change its page table entry in any cache tlb to be invalid okay if you remember the reason we want to do that is that we're reusing the page for something else and so the page table entry originally said this was a valid translation but we got to set it to invalid and then the tlb is a cache on the page table entry and the tlb is going to be incorrect and so we got to throw that tlb entry out okay and so some process lost a page in the process here and uh if it needs to get it back it'll of course get another page fault and pull it back in off the disk then we've got to load the new page into memory from the disk and of course that's a process that can take time a million instructions worth so while we're loading the new page off disk and actually while we're writing the contents back 
to disk if we do this inline then we're putting the process to sleep and then when the page comes back and the memory is now full with the new page we get to update the page table entry in the page table and invalidate any tlb for that entry okay and if you think about why we need to do this originally the tlb for that entry said invalid why because we got a page fault and so by invalidating or throwing that tlb entry out we know that when the processor retries that original access it'll miss in the tlb and we'll have to walk the page table okay and then we continue from the original location and this is really what makes this thing act like a cache and if you notice one of the key things in here is this very first item which is how do we choose the old page to replace and that's called the replacement policy and it turns out that's a topic of extreme importance when we're dealing with demand paging and so that's going to be a topic that we need to cover today okay and the tlb for the new page gets reloaded when the thread continues because it doesn't have an entry for that address and it'll be pulled from the page table and of course i said that processes are suspended when we're going off of disk so i wanted to pause here and just see if there are any questions on this particular slide because there's a lot of content here and so i just want to pause and everybody take a breath and tell me if there are any questions so good why invalidate the tlb for the new entry all right well can anybody tell me why okay so we're talking about this line here the second to last one and the reason is we want to go back through the mmu again that's correct and the reason is that we've just changed the page table entry originally when we went and tried to do the reference we looked into the page table and we pulled that page table entry into the tlb and that said invalid which is why we trapped in
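the fault-handling sequence above as a toy runnable model; every structure here (the dicts standing in for disk, dram, the tlb, and the page table) is a simplification i've invented to make the step order concrete, not real os code:

```python
import random

class PTE:
    def __init__(self):
        self.valid = False
        self.dirty = False
        self.frame = None

disk = {}           # page number -> contents "on disk"
memory = {}         # frame number -> contents in dram
tlb = {}            # page number -> frame, a cache of valid translations
page_table = {}     # page number -> PTE
frame_owner = {}    # frame number -> page currently living there

def handle_page_fault(page):
    # 1. pick a victim frame (the replacement policy; random here)
    frame = random.choice(list(frame_owner))
    victim = frame_owner[frame]
    # 2. write back if dirty, then invalidate the old pte and tlb entry
    if page_table[victim].dirty:
        disk[victim] = memory[frame]
        page_table[victim].dirty = False
    page_table[victim].valid = False
    tlb.pop(victim, None)
    # 3. load the new page from disk (the real process sleeps here)
    memory[frame] = disk[page]
    # 4. fix the new pte, invalidate its stale tlb entry, then retry
    pte = page_table.setdefault(page, PTE())
    pte.frame, pte.valid = frame, True
    frame_owner[frame] = page
    tlb.pop(page, None)

# one frame holding a dirty page 0; page 1 lives on disk
page_table[0] = PTE()
page_table[0].valid, page_table[0].dirty, page_table[0].frame = True, True, 0
frame_owner[0] = 0
memory[0] = "page 0 data"
disk[1] = "page 1 data"
handle_page_fault(1)
print(memory[0], "/", disk[0])   # page 1 data / page 0 data
```

note both tlb invalidations: one for the victim, whose translation is now wrong, and one for the faulting page, so the retry reloads the now-valid entry from the page table.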
the first place so we we have to basically invalidate this purely so that it'll get reloaded from the page table uh and and then when it gets reloaded then it'll be valid the second time around okay does that make sense good any other questions okay the third step in the the fifth step uh you mean load new page or one two three change page table entry in any cache so the difference between these two is we're this is the page table entry we kicked out when we replaced the page this is the page table entry that uh we're filling and remember why um we're invalidating both of the tlb entries so remember there's one page in the dram that we're changing from belonging to process the original process to this new process of just page fault page faulted so that physical page is transferring ownership and in the process we have two tlbs one from the old process that owned the page which we have to invalidate so that's it and so it's invalid and one from the new page entry uh the new process and in that case we have to invalidate the tlb so that when we go back to the page table we get a valid entry so there's two tlbs involved here and we're invalidating both of them because they're both wrong okay this is the point usually where people say well why can't you just fix the tlb up and the answer is that most processors don't give you the option of modifying the tlb directly um there are there's a few uh architectures like mips that uh allow you to to mess with the tlb directly but most of them do not and so the best we can do is just invalidate them so that then the mmu will pull it in off of the page table which is the actual correct contents at that point okay good so here are our steps that uh in handling of a page fault i just wanted to show you this graphically so originally we tried to do say a load to address m that reference looks up in the page table and we notice that the page table is invalid that's what this i means here and we get a trap into the kernel which or 
which is a page fault okay and at that point we realize that the page that we want is on disk so we start that access coming in off of disk and meanwhile we have also found a free page okay so on that previous slide i said we emptied that page out by writing it back to disk potentially in a real system which we'll talk about later we typically have a free list full of pages that are available for use and we're constantly cleaning pages by sending them back out to disk to make sure we have free frames so let's assume we had a free one in that case we pull the page off the disk into the free frame when that's done we reset the page table entry to be valid and we invalidate the tlb we restart the instruction and this time it'll work without a page fault all right now some questions we're going to need to answer like during a page fault where does the os get the free frame okay and on the two slides previous i kind of indicated well we find one on the fly and maybe we have to send dirty pages back to disk on the fly in reality like i said there's a free list but we'll get to the free list after we investigate our replacement policies a little bit more okay how are we going to organize all these mechanisms they're going to be organized around the replacement policy and you know replacement policy being something like lru or random et cetera another question we're going to want to address is how many pages does each process get okay how many page frames so that means if i take my dram and i divide it up do i give every process an equal amount of dram or do i only give dram frames to the processes that need them most perhaps that might be a good policy if there was only a good way to figure that out we'll talk about that a little bit later another thing that we're kind of interested in here is allocating disk paging bandwidth because if you have a malicious or badly written program that's walking all over memory you're basically causing continuous page
faults which is going to empty out the tlb it's going to slow everything down and every process in the system is going to hurt as a result and so one thing we might want to do even is figure out how to allocate paging bandwidth fairly okay so that's the type of scheduling so as we start here we need a working set model so what do i mean by that well here is the addresses here's the whole address space and here's time and what we see here is at a given slice in time if we were to look in this red band here as it goes by at any given slice in time we can look at all the addresses that are currently in use and that's the blue one so we slice straight through here and those are all of the pages that have to be in dram and mapped in order to make progress and notice a couple of things one it's not always the same pages which makes sense right so let's go back and show you that amazing animation see that as that little red bar goes across what we see is our slices in time represent different pages that are in use and so there's a couple of things to learn about this and we're going to make this a little bit more formal in a second one is as the working set which is the set of pages that are currently in use at any given time frame changes we need a way for that use of the dram to evolve so that we have the pages we need in memory and the ones we don't maybe are on disk so that somebody else can use the dram okay the second thing we're going to need to do is figure out how can we make sure that everybody's got their working set in memory because if we can't fit all of the working sets in memory then we're going to have thrashing and we're going to be in trouble and things aren't going to work properly all right now so we can look at how does cache size versus hit rate work in general so this is what the working set model is how big of a chunk of dram or cache are we going to use
and what you find is that as the cache size increases there are certain plateaus where all of a sudden you reach a new stability point where now your hit rate is higher okay but before that you just didn't have enough cache and so you're missing all the time then you hit a stability point where even as you vary the number of pages available or the amount of cache available it's not changing things much and then finally you'll hit a size again which will go to a new plateau and so on and so the working set model we're showing you here kind of represents well i have enough cache for say this slice but not for this slice here which is a lot more addresses that might be this first plateau and then the second plateau might be a little bit more and so in general as you increase the cache size the hit rate is going to go up or at least that's what we're hoping to do okay and as we transition from one working set to the next we're hopefully going to kick out things we don't need anymore bring in things we do so that we can optimize the cache size we've got okay and of course just as with the regular hardware cache which we were reminding you about we're going to run into capacity conflict and compulsory misses okay potentially although in the case of the memory system and virtual memory we're probably not going to run into conflict misses too often why is that does anybody remember why are conflict misses unlikely to be an issue in virtual memory yeah great because effectively the way our page table works is any address in the virtual address space can be mapped to any address in the physical address space so we effectively have fully associative caching in this case and therefore there aren't going to be any conflicts very good all right now another model is this zipf model which basically sorts pages by their popularity rank okay and what you see here is the popularity is this blue
curve that goes down as i go up in rank and the hit rate goes up all right but it goes up a little more slowly than in the working set model and so the issue with this is the likelihood of accessing an item of rank r is sort of one over r to the a where a is a small constant and so it's rare to access items below the top few because notice how the popularity drops off but there's a really long tail and so what this means is that a small amount of cache does you a lot of good but a large cache doesn't help you as much as you might think right this particular model of locality is very common in web accesses and other things like that so it's going to be interesting to ask the question of do we have this kind of stair stepping working set model or do we have a zipf style working set model and that's going to tell us something about how much we need it's definitely diminishing returns here okay in this case by the way rank is equal to the size of the cache in some sense but what it's really talking about is if i take all of the pages in my virtual address space and i sort them by popularity the most popular page is number one the second most is number two and so on and so yes if i think of this as cache size then you know when i go out to 16 that means i can hold the 16 most popular pages okay so it's both you know it's the rank which can correspond to the cache size and when it does we can figure out what our hit rate is all right so with this particular distribution there's substantial value from a tiny cache but very rapidly diminishing returns because of the long tail okay and substantial misses from a large cache so let's see if we can come up with a cost model to see how important it is to make our replacement policy work well and keep our hit rate up so demand paging is kind of like caching and it is caching right so you can compute an average access time which we're going to call the effective access time here just
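(A quick aside on the zipf model before the cost model continues: the hit-rate curve described above can be computed directly. This is a sketch — the exponent a = 1 and the 10,000-page population are illustrative choices, not numbers from the lecture.)

```python
# Zipf locality model: probability of touching the page of popularity
# rank r is proportional to 1/r**a. Caching the top k pages then gives
# a hit rate that rises fast at first but has a long tail.
a, n = 1.0, 10_000                         # illustrative values
weights = [1.0 / r**a for r in range(1, n + 1)]
total = sum(weights)

def hit_rate(k):
    """Fraction of accesses served by a cache of the k most popular pages."""
    return sum(weights[:k]) / total

small = hit_rate(10)      # a tiny cache already does a lot of good (~30%)
large = hit_rate(1000)    # 100x more pages only gets to roughly 76%
```

Note the diminishing returns: the first 10 pages serve about 30% of accesses under these assumptions, while a hundred times more pages only reaches about 76% — the long tail guarantees substantial misses even from a large cache.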
so that we keep it distinct in our minds from average memory access time but we're going to use the same equations so the effective access time here is the hit rate in the dram times the hit time plus the miss rate times the miss time and that miss time is going to have something to do with going to disk okay now the question about why conflict so i'm going to answer this question that's in the chat about why conflict misses aren't a thing it's because you only get conflicts when you have associativity that's less than fully associative and with our page table we have a fully associative cache and so there are basically zero conflicts in that situation okay and that's not because of the tlb that's because the page table maps any address to any other address so if we're trying to figure out the cost of a situation where we have a limited amount of dram lots of disk and we can compute a hit rate and a miss rate for accessing data in the cache then what do we got here well let's try some numbers so a typical memory access time to dram might be 200 nanoseconds and the time to deal with a page fault might be eight milliseconds okay and suppose that p is the probability of a miss and one minus p is the probability of a hit so then we can do this computation here okay so p is the probability of a miss so if it's in the dram it's 200 nanoseconds otherwise with some probability we have to go all the way out to the disk to bring it into dram and it'll be eight milliseconds okay now i have to convert my units so that i have all the same units so that's nanoseconds okay a millisecond is one thousandth of a second and a nanosecond is one billionth so you gotta make sure you know your units and so here's my effective access time it's 200 nanoseconds plus p times 8 million nanoseconds and here's where this pays off right if one access out of a thousand
causes a page fault then the effective access time is 8.2 microseconds so if one out of a thousand accesses causes a page fault what we've just done is we've slowed down the dram speed by a factor of 40. okay so that factor of 40 is potentially quite high right so that's pretty bad and that's with one out of 1000 accesses causing a page fault so you can see why it's incredibly important to not have any page faults all right so if we want to slow down by say less than 10 percent then we can do a computation here where 200 nanoseconds times 1.1 basically is our maximum time that we want and we can come up with the fact that our probability has to be less than 2.5 times 10 to the minus 6. so that's basically saying that if i want this effective access time to be no worse than 10 percent bigger than the dram time i can only have one page fault in 400 000 accesses so it's extraordinarily important to never page fault okay so i'm going to pause on that it's extraordinarily important to essentially never page fault because the moment you start page faulting that time to go to disk is so high that you just bring your performance to a grinding halt all right questions okay we good so do you have enough dram for that well that's a good question but it turns out it's not quite the right question okay because this doesn't just depend on the amount of dram we've got it also depends on the access pattern so if you had a loop that only accessed one page over and over again forever then you could get a 100 percent hit rate no misses and you'd only need one page of dram all right and yes we do have one page of dram so the answer to the question about is there enough dram to hit this slowdown is going to be heavily application dependent it's going to depend on what the application's memory access pattern is how much dram we can give to it okay and so this brings up the interesting question of should we try to predict the access pattern maybe or maybe we should
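The arithmetic on this slide is easy to check numerically. A sketch using the lecture's figures (200 ns dram access, 8 ms page-fault service time) and the slide's simplified form EAT = hit time + p × miss time:

```python
# Checking the effective-access-time numbers from the slide.
HIT_NS = 200                 # dram access time
MISS_NS = 8 * 1_000_000      # 8 ms page-fault service time, in nanoseconds

def eat_ns(p):
    """Effective access time using the slide's EAT = hit + p * miss."""
    return HIT_NS + p * MISS_NS

one_in_1000 = eat_ns(1 / 1000)        # 8200 ns = 8.2 microseconds
slowdown = one_in_1000 / HIT_NS       # roughly a 40x slowdown of dram

# For at most a 10% slowdown, solve HIT * 1.1 >= HIT + p * MISS for p:
p_max = (1.1 * HIT_NS - HIT_NS) / MISS_NS   # 2.5e-6
faults_one_in = 1 / p_max                   # one fault per 400,000 accesses
```

This is exactly why the lecture keeps repeating "essentially never page fault": even a one-in-a-thousand fault rate erases the speed of dram.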
try to do some observations and see if things that are missing too often if we can give them slightly more dram and things that maybe are just hitting all the time or really frequently maybe we can take some memory away from them without a problem and maybe we can come up with a dynamic policy for redistributing pages okay so that's a good observation and one that we'll come to in a little bit but the thing to get out of this particular slide is the extreme importance of not page faulting which really means we've got to be very careful that the pages we have in memory are the right pages so that we don't miss and if we have the right pages in memory we've got to be very careful not to throw them out incorrectly so that's where the replacement policy comes in okay excuse me we've got to make sure that no matter what if we have to find a new dram page because somebody needs one we don't want to throw out a page that's going to be useful for us okay because if we do that then we're going to start taking an extra eight milliseconds a million instructions worth of time to do something and that could be a problem okay so what factors lead to misses in the page cache well we are once again back to the three c's with the fact that there aren't any actual conflict misses but first and foremost we have compulsory misses and these are pages that have never been paged into memory before so the best we can do with these of course is pre-fetching predicting the future somehow okay so this is not quite what the previous questioner had said in the chat but if we can somehow find out that a process is walking its way through memory maybe we can have a prefetcher that's already got the next page coming off of disk so that by the time we get to it it's likely to already be in memory so that would be a way to get rid of compulsory misses okay and there is some pre-fetching that goes on in modern
operating systems capacity misses are cases where we just don't have enough memory okay and so in those cases if we start getting a lot of capacity misses maybe we start adding a little more dram to a given process to see if that'll help okay now one option is actually increasing the amount of dram but the problem with that is you've got to shut everything down put in some new simms with dram on them and start things up again we'll leave that option off the table for now because that's the drastic option requiring buying more stuff right another option is basically if you have a bunch of processes maybe we can readjust who's using what dram to get a better overall page miss behavior okay conflict misses as we already said don't exist in virtual memory since it's fully associative all right so that's good now if you remember back a lecture or two ago what did i say i said the three c's plus one because there was an extra c that we tossed in there caused by cache coherence coherence misses in this case we're actually not going to have a fourth c we're going to have something called p which is a policy miss and this is caused when a page was in memory but it was kicked out because of a bad replacement policy and so what the next third to half of the lecture is going to be about here is how do we avoid policy misses because those are drastically bad in the case of paging since we have to go to disk and burn a million instructions worth of execution time all right so how do we get a better replacement policy all right so let's talk some administrivia as you know midterm two is coming up they do seem to come rather frequently i guess the upside is there's no final so that's good but timing is five to seven pm unless you talk to us about a conflict the conflicts with 170 are the same as they were last time which is you're going to take the 170 exam after 162 and you
will have heard about that or asked us about that if you're not sure other conflicts need to have been resolved already so we've talked to several of you and there may be a couple of outstanding ones that we know about that we're still trying to work out all right topics are going to be up until lecture 17. so just keep that in mind so certainly today's topics are going to be there as is potentially monday's topics as i've mentioned before we're going to require you to have your zoom proctoring setup working so you must have screen sharing audio and camera working and no headphones unless you have explicit dsp allowances for headphones okay so try to get your setup all debugged and ready to go the review session is going to be next tuesday timing is going to be seven to nine pm zoom details will be announced on piazza if they haven't been already i forget and questions about the midterm by the way i'm glad that we don't have to have a final on the birthday of the person that's chatting there so happy not yet birthday nicky so do we have any questions about the midterm or are we good no final no only three midterms okay there's a last midterm which technically assumes that you remember concepts from the first and second midterms all right so don't forget i have office hours two to three come shoot the breeze talk about whatever you like to talk about talk about operating systems talk about life the universe and everything if you wish these office hours are not necessarily for helping you with lab assignments and so on but definitely come talk to me about high level ideas or lectures or whatever that'd be great otherwise i'm just sitting here with my zoom up and doing other things so come talk let's see the other thing i wanted to mention is make sure to do your peer evaluations we talked about this last time but the basic idea here is you get 20 points for each one of your partners not including you so
for instance in a group of four you'd get 60 points to give out to the other partners and you're going to give them all out i've had some people say well can i you know not give them all out or whatever no you've got to give them all out and this is your evaluation of the relative effectiveness of your partners if you're completely happy with them everybody gets 20 points that sounds great if you're less happy you could give 18 to one and 21 to each of the other two but notice the sum is still 60. okay and everything is validated by the ta at the end of the class so your ta also knows the dynamics of your group so make sure they know that and in principle the project grades are a zero-sum game so if you're out there and you're not contributing to the project at all it's quite possible that your points will get redistributed to your other partners since you've given them extra stress as a result because this is a project class so i'd much prefer to have 20 points across the board for everybody and so let's have that as a goal all right the peer evaluations are not about giving yourself any points your other partners give you points every term somebody tries to give themselves you know they've got 60 points to distribute they try to give 59 to themselves and one to one of their partners and zero to everybody else it just doesn't work that way and we're going to ignore the 59 points you give to yourself and rescale everybody else so just do the right thing and hand out all the points to your partners okay lastly elections are coming up all right don't forget to vote if that's an option for you i mean this is one of the most important things you can do in the united states don't miss the opportunity i don't need to tell people that this is probably the highest stress most important election for lots of people those of you that can't vote i apologize my condolences to you but this is all the more reason that
those of us that can should do that all right and you know vote your mind the important part is that you participate that's the most important thing okay and don't put your ballots in the fake ballot boxes in southern california use the post office or something okay good now so let's talk about replacement okay so page replacement policies why do we care well i think my effective access time slide hopefully gave you a good why we care replacement is always an issue with a cache but it's particularly important for pages because the cost of being wrong is really high okay the cost of going to disk is a million plus instructions if you're wrong in a hardware cache that's going to dram and the miss time to dram is not that high relative to other things so the cost of being wrong there might be less and that was why we were talking about things like random replacement working out pretty well most of the time but when you're talking about going to disk random is really not great okay because you're going to do the wrong thing and there's so many better things you could do in terms of picking a page to throw out all right so let's talk about some simple policies right you can imagine fifo comes into play this sounds like what we did with scheduling right we started with fifo you throw out the oldest page and you're going to be fair because you're going to let every page be in memory for the same amount of time okay so this sounds good except that it's very bad for the following reason it may turn out that the page that was admitted into the dram a long time ago is still used every other reference and so the fact that it was loaded right away but then is referenced every other time means that you're going to do very definitely the wrong thing if you throw out the oldest page because eventually you're going to throw it out even though it's probably the most frequently used page okay so fifo seems like it's probably a bad
idea okay fifo's been a bad idea with scheduling in the past and it certainly seems like a bad idea as a replacement policy here random we brought up as a replacement policy in the hardware cache instance last time or the time before that this one was better than you'd expect in the case of associative caches in hardware okay and so the idea here you pick a random page for every replacement and this is a good solution maybe for the tlb because it's fast okay but with the tlb when you miss you go through the mmu to do a page table walk and so maybe this is an okay policy there because the cost of a page table walk may not be so bad okay but it's still pretty unpredictable and it's really not a great policy for page replacement because you're as likely to randomly pick something bad as something good there okay this is my favorite guaranteed not to exceed policy okay this is called min and if you remember the srtf policy which was you know if we knew the future we could pick the best task to schedule the shortest remaining time first to schedule here min is the same idea we're going to replace the page that won't be used for the longest time in the future okay and this is a great policy for page replacement because it's provably optimal but of course once again you can't really know the future okay so min is going to be our yardstick against which we're going to measure other policies to see how close they get to min and you know a little hint about what's good there is going to be well the past is a good predictor of the future okay so this is not lru right so lru may be a good policy that's sort of like min but min is replace the page that won't be used for the longest time in the future so it's not lru right if i knew the future then of all the pages i've got i'd pick the one whose next use is farthest in the future and that's the one i'd throw out lru
is the least recently used page which is going into the past and trying to make a prediction based on the past all right so these are little different things and as you've already figured out here lru is going to be an approximation to min okay it's going to be a way of trying to use the past to predict the future all right good question so min is not lru so let's look at the next one which of course is lru and this is replace the page that hasn't been used for the longest time and programs have locality so if something's not been used for a while it's unlikely to be used in the near future and it seems like lru might be a good approximation for min and most of the time it is okay now let's ask ourselves how we would actually implement lru so obviously we can't implement min right min is an ideal oracle that uses the future and we don't know how to do that but how can we do lru well we just put all the pages in a list and you know the tail is the least recently used one and every time we use a page we move it to the head and so the thing at the head is the most recently used page and the thing at the tail is the least recently used and when we're looking for something to replace we grab the tail okay so this sounds great except this is very much not great and the reason is that every reference requires us to move the page we're referencing to the head of the list so that means that every load or store from dram potentially has to rearrange a bunch of items in the linked list to put the page we just referenced back at the head and so this is basically not going to be an implementable policy in any way that avoids making loads and stores really slow all right so i'm going to pause there for a second just to make sure that's clear to everybody because in order to do lru every load or store has to take the id of the page and somehow rearrange it so it's at the head of the list which means multiple loads and stores are
required per load or store to come up with lru okay now another thing you could imagine maybe is keeping a time stamp on every page so that every time you reference it you stamp it the problem then is of course that you'd have to sort by stamp to figure out which one is the oldest least recently used page in that time frame and that's hard to do as well okay so it seems like we're being stymied here we want lru because it seems like a good stand-in for the oracle min but now we don't know how to do lru and so just to give you a preview we're going to find a way to approximate lru in a way that works mostly as well as lru would if we could implement it okay and thereby give us a way to get closer to min than we might be able to get otherwise okay so that's our little bit of foreshadowing okay so in practice people approximate lru and we'll tell you how okay but let's look at some of these policies just to understand so i want to go through some simulations just to see what happens on a request pattern so let's set this up i'm going to have a really limited processor architecture here which has three pages of dram and four virtual pages in the address space so the virtual pages are called a b c d and the processor is going to do a b c a b d a d b c b that's got a great beat to it right so let's see if we can figure out what fifo would do all right so here we have the three pages one two and three these are the physical dram pages okay that we've got and when we do reference a that's in virtual memory and at that point we need to map a to some page now that's really easy right now because i don't have any assignments of dram pages to address a so i'll just pick the first one okay so now a is in dram page one b is in dram page two so i'm just working my way through pages because i'm doing fifo replacement here actually i'm doing fifo assignment i haven't replaced any yet right so c grabs the third
one so now if you look here all of our pages are currently assigned in the page table and we happen to know that address d in the page table is marked as invalid how do i know that address d is marked as invalid in the page table anybody figure that out why is page d marked as invalid it's never been accessed right what else it's not in the dram how do we know that how do we know it's not in the dram okay how do we know it's invalid i'm giving you yeah all of our page frames are assigned to other addresses right page frame one we know the page table gives it to a page frame two gives it to b page frame three is given to c we know that there is a slot in the page table for d but because a b and c are already taking up all the physical pages we know that d has to be invalid okay or the operating system is broken but let's assume that that's not true for the moment right so when we get to a what's great is the mmu gets to find an entry for a and in fact we can even guess that the tlb already has a in it we can imagine so not only did we get the mapping for a back here at the first cycle but we also set the tlb up and so this works fine okay we get to b that works fine we get to d all right now d is a miss in the page cache why well d is going to be looked up in the page table we're going to see it's invalid and at that point we get a page fault and we're going to have to do something here and what page are we going to pick to replace for d a why yep because we're doing fifo right so we were doing page one page two page three and now page one is the oldest page and so voila we overwrite page a and assign page frame one back to d okay now a comes along and look what happened here a is going to be another page fault okay because we got rid of a and if you notice that means we've got to assign a and we're doing a fifo assignment so a gets assigned to page frame two d well that's good we don't have to
do anything b well b is now gone and so we're going to assign b down here c well c is now gone so we have to assign c and then b has no fault okay and so if you look here we've got one two three four five six seven page faults when a b c a b d a d b c b encounters a fifo page replacement algorithm okay so there's seven faults one two three four five six seven and notice when we're referencing d here replacing a was the wrong thing to do right because we were going to immediately need a again so if we had a better replacement policy maybe we could avoid this page fault for a and maybe we could avoid this page fault for b okay so fifo here is not doing well let's look at min okay which by the way is going to do the same thing lru does in this case but let's just think about min for a moment so here we go a b c a b d a d b c b ready so a is going to do the same thing b is going to do the same thing c is going to do the same thing now you might say well aren't you doing fifo replacement well the answer is i'm just grabbing things off a free list i haven't replaced anything yet i've just sort of done the assignments so now a works right there's no page fault b no page fault and now we come to d all right and min says pick a page to replace that's going to be used farthest in the future okay so if you look in this reference stream up here the thing that's going to be used farthest in the future is not a it's not b it looks like it's c right so c is the page that's going to be used farthest in the future which is why we choose to replace c with d okay so that's min min is looking into the future looking into your crystal ball tell me what's the page that's going to be used the farthest in the future okay and now we get to a again and a is in place right we get to d d is in place we get to b b is in place why did this work out so well because we know the future okay c well we get to c and now c is no longer there what
do we do well at this point we don't have much in the future to go on and so we're just going to replace a great so as chris stated in the chat page frames refer to physical memory we only have three pages in physical memory and the fact that we have four virtual pages means we have more virtual memory pages in use than physical pages available which is a typical reason for a cache right we've got more virtual pages which are out on disk than physical pages which are like the cache and therefore every page fault is pulling things off the disk and bringing them into the cache okay so yes correct now the other thing i wanted to point out is when we got to d here which of these pages was the least recently used okay let's look back okay so if i back this up and we go to d notice that both a and b were used recently so c is the least recently used page so if we had a way to do lru d would have picked number three also so this is a good illustration of why lru is often at least a good approximation for min it's not always the same thing okay so in this case we have five faults instead of the seven in the previous example where d is brought in to replace the page not referenced until farthest in the future so what does lru do same decision making all right are we good now is lru guaranteed to perform well consider the following a b c d a b c d a b c d well you can imagine what's going to happen here i'm just going to walk you through so here is a case where not only where lru performs exactly the same way as fifo does and that's because we have three physical pages but four distinct pages being referenced and we're always going a b c d a b c d a b c d and as a result we get this cascading page fault pattern so what's interesting about this is it's a lovely pattern i would agree the thing that's interesting about this though is this is the kind of pattern you can get when for instance if this is the page cache
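(The fault counts worked out above — 7 for fifo versus 5 for min and lru on a b c a b d a d b c b with 3 frames — can be reproduced with a small simulator. A sketch: these list-based policies are written for clarity, not the way a real os implements them, and "min" here simply scans the future of the reference string.)

```python
# Simulate fifo, min, and lru page replacement on a reference string.
def count_faults(refs, nframes, policy):
    frames = []            # resident pages; list order is policy metadata
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            if policy == "lru":            # on a hit, move to most-recent end
                frames.remove(page)
                frames.append(page)
            continue
        faults += 1
        if len(frames) < nframes:          # free frame still available
            frames.append(page)
            continue
        if policy in ("fifo", "lru"):
            victim = frames[0]             # oldest / least recently used
        else:                              # "min": evict page whose next
            future = refs[i + 1:]          # use is farthest in the future
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else len(future) + 1)
        frames.remove(victim)
        frames.append(page)
    return faults

stream = list("abcabdadbcb")
fifo, mn, lru = (count_faults(stream, 3, p) for p in ("fifo", "min", "lru"))
# fifo -> 7 faults, min -> 5, lru -> 5, matching the walkthrough
```

Note that fifo and lru share the frames[0] eviction only because of how each maintains the list: fifo never reorders on a hit, so the front is the oldest arrival, while lru moves hits to the back, so the front is the least recently used.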
which we'll talk about later when we talk about file systems and you're walking through a file system by doing a recursive grep or something you can also end up with this situation where you're always page faulting and none of your cache is helping you at all okay what i want to show you here this is a fairly contrived example with a working set of n plus one on n frames what's interesting here is that min does better because at the point we get to d min will actually make a different choice okay so we have one two three four five six so min will actually only have six page faults rather than the twelve we had up there okay so min is still the oracle the guaranteed best case which lru mostly behaves like okay just not always all right questions now i'm going to state up front here that lru mostly performs very well okay so i gave a contrived example here the question that's going to be important is how do we approximate lru if we can't implement it exactly okay and what i'm going to show you first before we talk about how to build an lru is this graph of page faults versus number of frames if you look here we have three frames right so we're at the three point in the previous slides but if we were to add some more frames presumably our number of page faults would come down that's a desirable property that as you add some extra memory to that process the overall fault rate goes down and the question is is it always the case that you add more frames and the fault rate goes down and unfortunately the answer is no okay there's something called belady's anomaly and certain replacement algorithms like fifo don't have this obvious property you can actually add some more physical memory and the fault rate goes up all right and i'm going to show you this so does adding memory reduce the number of page faults the answer is yes with lru and min but not with fifo okay and
so here we have a reference pattern a b c d a b e a b c d e and notice that we've got three physical page frames and five virtual ones now and what's interesting about this is if we add a fourth physical page frame and we do the same fifo assignments you can work this through on your own what you'll find is there are actually more page faults even though we've added more memory to that process okay which is a little counterintuitive and it turns out that fifo is just bad for many reasons not the least of which is that fifo suffers from belady's anomaly okay and so the contents can be completely different when you add more memory and that's kind of part of the reason this has a problem in contrast with both lru and min when you add some more physical pages things always monotonically go down they may stay constant for a little bit and then go down but this is why we are going to abandon fifo as a desirable policy from this point on okay questions now did i say min there i meant fifo we're abandoning fifo from this point on we're not going to abandon min of course we couldn't implement min anyway all right thanks for catching that now so how do we approximate lru well there's something called the clock algorithm which i'm sure you've all heard about so the idea here is we take every page in the system and we link them all together okay and so every physical page is in this loop and we're going to have a single clock hand that's going to point at a page and what happens here is we're going to advance only on page faults okay and we're going to check for pages that aren't used recently and we're going to mark them in a way to keep track of that okay and so what we're really looking for is not the least recently used page but a not recently used page okay an old page and so how do we do that well the details are pretty simple
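belady's anomaly on that reference pattern can be checked with a tiny fifo simulator again an illustrative sketch i've added not the lecture's code

```python
def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames, order, faults = set(), [], 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.remove(order.pop(0))   # FIFO: evict the oldest arrival
            frames.add(p)
            order.append(p)
    return faults

refs = list("abcdabeabcde")
print(fifo_faults(refs, 3))   # 9 faults with three frames
print(fifo_faults(refs, 4))   # 10 faults with four frames: more memory, more misses
```

with three frames this stream takes nine faults with four frames it takes ten so adding a physical page frame actually made fifo worse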
here and i'm going to walk you through them but the idea is that every page is going to have something we're going to call the use bit now intel calls it the accessed bit let's call it use for the moment here and that use bit is something where the hardware sets the use bit in the page table entry or the tlb when the hardware uses that page so either a read or a write to that page will set the use bit okay now the hardware never clears it never puts it to zero and so that's gonna be up to our software it's gonna be up to the os to clear the use bit to zero underneath the clock hand and so what will happen just abstractly here is we're gonna put the use bit to zero and then when we come all the way around we'll take a look and if the use bit is still zero then we know that that page hasn't been used in all the time it took the clock hand to go all the way around and so at that point we're going to call this an old page because it hasn't been touched in the time that we went all the way around okay and again keep in mind that we only move the hand when there's a page fault so going all the way around meant that we've had enough page faults to walk through all of our memory okay now if the clock hand looks at a page and its use bit is one that means that that page has been touched since the last time that we were there and so we'll set it back to zero again and then we advance on to the next one and we check the next one and eventually we'll find one that has a zero use bit at that point we know that this is a good candidate for replacing because it's an old page okay so that's the clock algorithm i'm gonna pause for a second here all right so notice that the use bit gets set to one by the hardware but cleared to zero by the operating system so it's a funny bit it's set by hardware cleared by the operating system all right now some more details notice that what i said here is that you first check the use bit and if it's a
one you set it to zero and you keep looking for a page because you've found a page that's not an old one yet the question is will you ever find a page or will you just loop forever okay and the answer is you'll always find a page because notice that we don't let any processes run while we're trying to find a page it's only the operating system running all the other stuff is suspended and therefore we keep setting everything to zero and in the worst case we may work all the way around but then that first page that we set to zero is the one we end up replacing right away okay and you can imagine that if we have to go all the way around before we find something then maybe we have a lot of thrashing going on okay but this algorithm is guaranteed to find a page as i've stated it here okay now what if the hand is moving very slowly well that's actually good right why is it good because there are not many page faults because i only do this on a page fault and it either means that the page faults are not coming very frequently or i quickly found a page in either case it means that i'm not walking my way through all of the pages just to find one to replace so that's a good sign if the hand's moving quickly that means we have lots of page faults or lots of reference bits set and that means there's a high access rate on pages and or a lot of page faults either of those meaning i've got some trouble i've got what i would call memory pressure okay so one way to view this clock algorithm is as a crude partitioning of the pages into two groups young and old okay and we're gonna throw out a page from the old category okay now you might say well you know why not partition into more groups well we can do that there's something called the nth chance version of the clock algorithm all right and this is basically give a page n chances before we throw it out and the idea is the os is going to keep a counter on each page and it's going to be the number of
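here's one possible sketch of the basic clock algorithm in python the use bit that hardware would set is emulated in software here and the class is my own illustration

```python
class Clock:
    """Basic clock algorithm; the hardware-set use bit is emulated in software."""
    def __init__(self, nframes):
        self.frames = [None] * nframes    # physical frames arranged in a circle
        self.use = [0] * nframes          # one use bit per frame
        self.hand = 0

    def reference(self, page):
        if page in self.frames:
            self.use[self.frames.index(page)] = 1   # hardware would set this
            return False                            # hit: no page fault
        # page fault: sweep, clearing use bits, until an old page turns up
        while self.use[self.hand] == 1:
            self.use[self.hand] = 0                 # one sweep of grace
            self.hand = (self.hand + 1) % len(self.frames)
        self.frames[self.hand] = page               # replace an old page
        self.use[self.hand] = 1
        self.hand = (self.hand + 1) % len(self.frames)
        return True

clock = Clock(3)
faults = sum(clock.reference(p) for p in "abcabd")
print(faults, clock.frames)   # 4 ['d', 'b', 'c']
```

on the stream a b c a b d the fault on d sweeps past all three frames clearing their use bits and then replaces a the first old page it finds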
sweeps of this page and so on a page fault you check the use bit and if it's a one you clear it and you also clear the counter because this page was used in the last sweep and so it's a young page we're going to totally discount it on the other hand if it's still zero what that means is it hasn't been touched in a whole iteration around the loop but rather than the vanilla clock algorithm what we're going to do instead is we're going to say oh let's give it another chance and we'll increment our count on that page and only if we hit n do we replace it okay so what this nth chance is doing is it's saying that before we replace a page it has to be not used for n loops of the clock okay so that basically means the clock hand has to sweep by n times without the page being used before it's replaced how do you pick n well what's interesting is if you pick a really large n you're effectively getting a better approximation to lru because now we're dividing pages into not just two groups young and old but groups that vary by what the value of n is and as n gets larger we're dividing them into more and more categories and if it's really large you kind of get a better approximation to lru okay but it's really expensive because you have to go around many times before you find something to throw out why pick a small n well it's much more efficient okay so you might imagine a small number like n is two or three not n is a thousand okay and here's a particularly useful way of using n where we're gonna keep n very small and the thing we haven't talked about at all up until now is when we throw a page out we need to make sure that it doesn't have data in it that we can't afford to lose and therefore we need to write it back to disk that means the modified or dirty bit is going to be set okay and in that instance if we go around the clock and we pick a page that we're interested in but
it's dirty what we could do is start it on the process of being written back to disk and then wait to go around again and if the dirty bit is cleared at that point then we know it's a clean page and we can just throw it out okay and so one idea here is basically that for clean pages you use n equal one and you immediately replace them if the value of use is zero when you look at them otherwise if it's a dirty page then what i'm going to do is start it being written out to disk and i'm going to wait for n equal 2 namely i'm going to go all the way around again before i replace it and hopefully it's now a clean page by the time i've gotten all the way around all right so that's called the nth chance good now let me bring the intel pte into this and also talk through the page table entries again i've shown you this before we've really got four bits of interest to clock type algorithms p or pr the present bit or the valid bit those are called different things on different architectures the writable bit or w you see here basically says that this page can be written when it's a one sometimes there's the opposite sense in which you have a read-only bit so when it's a one the page can only be read but not written in the intel architectures and a lot of other ones there's a w bit which has to be equal to one before you can write the access bit or the use bit we've already explained it's zero if the page hasn't been accessed since the last time the software set it to zero it's a one if it has been accessed and it's set to one by hardware if it's been accessed since the last time it was set to zero and then there's the d bit or dirty bit it's also sometimes called modified and if d is zero then the page hasn't been modified since the page table entry was loaded and you pulled the page itself off of disk and if it's a one then it's been written to since then and so these four bits p w a and d make for a much
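the nth chance policy just described might be sketched like this a counter per frame tracks how many sweeps the page has survived unused this is my own illustration and it leaves out the clean versus dirty refinement

```python
class NthChanceClock:
    """Clock with n chances: evict only after a page survives n sweeps unused."""
    def __init__(self, nframes, n):
        self.frames = [None] * nframes
        self.use = [0] * nframes
        self.count = [0] * nframes        # sweeps survived without being used
        self.n = n
        self.hand = 0

    def reference(self, page):
        if page in self.frames:
            self.use[self.frames.index(page)] = 1
            return False
        while True:                                  # fault: hunt for a victim
            if self.frames[self.hand] is None:
                break                                # free frame available
            if self.use[self.hand] == 1:
                self.use[self.hand] = 0              # used this sweep: young again
                self.count[self.hand] = 0
            else:
                self.count[self.hand] += 1
                if self.count[self.hand] >= self.n:  # unused for n sweeps: evict
                    break
            self.hand = (self.hand + 1) % len(self.frames)
        self.frames[self.hand] = page
        self.use[self.hand], self.count[self.hand] = 1, 0
        self.hand = (self.hand + 1) % len(self.frames)
        return True
```

with n equal one this degenerates to the plain clock algorithm larger n partitions pages into more age categories at the cost of more sweeping before a victim is found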
more complete set of bits required for paging right so we clearly need the present bit to know whether a page is in memory or not the writable bit is basically how we allow some pages to be read only and some written i'm going to show you another interesting way to use that in a moment the access bit and the dirty bit we've already talked about to some extent but let's look at some variations so some variations might be do we really need a hardware supported modified bit or dirty bit and if you think about it for a moment once i've paged something in and i've marked the page table entry as valid and writable for instance then i'm going to turn the processor loose and it's going to be allowed to do reads and writes to that page at full speed okay so hopefully you can see why that question i asked on the following slide is a good one because you can imagine that if i didn't have this done in hardware then how would i know that the page has been written to because i'm just letting reads and writes loads and stores go against that page and the operating system isn't involved okay so unless i have that modified or dirty bit in hardware i won't know that information and so that's why this question comes up do i need it and it seems like the simple answer would be yes but the real answer is no and it's because we can be clever in how we use those four bits okay so we can emulate it using the read only idea or the w bit right so what we're going to do is we need a software database of which pages are allowed to be written we kind of needed that anyway so for every process we know which pages are marked read only which ones are writable and we need that as we page things in and out from disk and so on so we assume we already know that and so we're going to let the mmu help us so that the operating system gets to take over when we need to record information okay now the question is does the
cpu set the dirty bit if we overwrite data with itself that's the question on the chat here yes so the cpu doesn't try to distinguish the notion that you wrote the number three over the number three all it knows is that you wrote it and by the way in most cases that's a particularly good simplification because it's very rare that you completely overwrite everything in a page with exactly the same value all right so the dirty bit just means that i executed a write instruction against that page now what we're going to do is we're going to tell the mmu that pages have more restrictive permissions than they really need and so what do we do well we're going to mark pages that could be written as read only okay and if we do that then we know the moment somebody tries to write we're getting a page fault and now the operating system can record the fact that we've written okay so this is a new algorithm i'm going to call this the clock emulated modified bit or emulated m and initially we're going to mark every page read only with w equals zero even the writable ones and we're going to clear all the software versions of the modified bit that we're keeping in the operating system somewhere and notice why do i say the software versions because we're assuming for a moment the hardware doesn't support a modified bit okay so the moment we cause a write what happens is we get a page fault and if the write's allowed we check in our database then the operating system is going to go ahead and set the modified bit in software in the operating system and then mark the page as writable and so from that point on we let the writes go at full speed without any page faults but we've already recorded the fact that somebody has written okay and so whenever the page then gets written back to disk we'll clear that modified bit back in software and mark things as read only again to catch future writes okay so this is hopefully pretty clever here as you see what
we've done is we've decided to play with the permission bits on the page table entry to give us page faults that are events we can then use to track whether the page is dirty or not okay now could this cause twice as many page faults yes all right this could cause an issue where you get a page fault when you pull the thing in and then you get another page fault when you write it so that's twice as many page faults notice though that the second page fault the fault on a page that's already in memory doesn't require going to disk it's a simple event into the kernel and back out again so that's a fast page fault as opposed to the page fault that pages things in off of disk which is a million instructions okay so basically trapping into the kernel if you set it up properly can be reasonably fast for things like this okay but as you have identified we are page faulting twice as frequently as we would otherwise here's another question do we really need the hardware supported use bit okay so once again so does the first write happen here well that's a good question i'm glad you mentioned it so the first write caused the page fault right so we set the modified bit and mark it as writable and then we return back to the program of the process excuse me which is going to retry the write okay because the page fault is a synchronous operation which occurred on that write and therefore that write is not going to make any forward progress at all and so when we return to retry it we're going to retry the write and it'll succeed on that second time through all right and so you're gonna actually have two write attempts one of which caused the page fault and the other of which actually causes the write to happen and then the process gets to go forward so the idea about do we need a use bit no we can emulate it in the same way as above in fact what i'm going to show you is how we could get
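the emulated modified bit scheme can be sketched as a little state machine the class below is my own illustration the real mechanism lives in the page tables and the fault handler

```python
class Page:
    """Software dirty bit on hardware that only has a W (writable) bit.

    really_writable is the OS's own record of the page's true permission."""
    def __init__(self, really_writable=True):
        self.really_writable = really_writable
        self.w = 0          # hardware W bit: start read-only on purpose
        self.modified = 0   # the emulated modified bit, kept by the OS
        self.faults = 0

    def store(self):
        """Model a write instruction hitting this page."""
        if not self.w:                 # hardware traps: protection fault
            self.faults += 1
            if not self.really_writable:
                raise PermissionError("write to a truly read-only page")
            self.modified = 1          # record the write in software
            self.w = 1                 # later writes run at full speed
        # the retried write now succeeds

    def write_back(self):
        """Page cleaned to disk: clear modified and re-arm the trap."""
        self.modified = 0
        self.w = 0

p = Page()
p.store(); p.store()
print(p.modified, p.faults)   # 1 1 -- one cheap fault, then full speed
p.write_back()
p.store()
print(p.faults)               # 2 -- the trap is re-armed after write-back
```

notice the trade the lecture describes one extra fast page fault per writable page in exchange for not needing a hardware dirty bit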
by without a use or a modified bit so that effectively the only thing we've got is the valid bit and the writable bit okay and how do we do that well here's the clock emulated use and m bit algorithm and we're going to mark every page as invalid regardless of whether it's valid or not okay and notice that we're gonna mark all the pages as invalid even if they're in memory and we're gonna clear all the emulated use bits and modified bits to zero nikki i'll answer your question in just a second and so now what happens well it doesn't matter if we do a read or a write we're going to cause a page fault because we marked things as invalid okay and so we'll trap to the os on any access and at that point we're going to definitely set the use bit equal to one because we had some access it doesn't matter whether it was a read or a write the use bit gets set to one and then we can take a look at what it was because we know the address this was at and we can know whether it was a read or a write that was attempted and if it's a read we're going to mark the page as read-only at this point meaning w equals zero and that means we'll catch future writes on the other hand if it was a write we're going to just set the modified bit to one and mark the page as writable so because we set w to zero meaning it's read only then if we happen to write we'll catch it and be able to set the modified bit okay and then when the clock hand passes by just as i mentioned with the clock algorithm earlier we're going to reset the use bit to zero and mark the page as invalid again okay and the modified bit gets left alone until the page gets written back to disk so the question that was asked in the chat here which is a good one is well this doesn't seem useful i'm saving one or two bits so why are you going to all this trouble so the answer is i'm talking about architectures processors that don't have a use bit or a modified bit implemented
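going one step further the emulated use and modified scheme with only a valid bit and a w bit might look like this again just an illustrative sketch of my own with the three mapping states made explicit

```python
class EmulatedPage:
    """Emulate both use and modified bits with only valid + writable bits."""
    INVALID, READ_ONLY, WRITABLE = range(3)

    def __init__(self):
        self.state = EmulatedPage.INVALID   # in DRAM but deliberately unmapped
        self.use = 0                        # software use bit
        self.modified = 0                   # software modified bit
        self.faults = 0

    def access(self, is_write):
        if self.state == EmulatedPage.INVALID:
            self.faults += 1                          # trap on any access
            self.use = 1                              # any access marks it used
            if is_write:
                self.modified = 1
                self.state = EmulatedPage.WRITABLE
            else:
                self.state = EmulatedPage.READ_ONLY   # w=0: a later write traps
        elif self.state == EmulatedPage.READ_ONLY and is_write:
            self.faults += 1
            self.modified = 1
            self.state = EmulatedPage.WRITABLE
        # WRITABLE access, or READ_ONLY read: full speed, no trap

    def clock_pass(self):
        self.use = 0
        self.state = EmulatedPage.INVALID   # re-arm to catch the next access

p = EmulatedPage()
p.access(is_write=False)   # read: trap, use becomes 1
p.access(is_write=True)    # write: second trap, modified becomes 1
p.access(is_write=True)    # now full speed
print(p.use, p.modified, p.faults)   # 1 1 2
```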
in hardware okay the intel ones that you guys are dealing with have the advantage that they have both a use and modified bit or an accessed and dirty bit those are done by the hardware if you had an architecture which didn't have them like the vax which i'll mention in a second then you've got to do something else otherwise you're going to get incorrect behavior and this is showing you how you can get by with just the valid bit and the read write bit to emulate modified and use okay but we have a lot of page faults going on here as was identified just to simulate use and modified and so remember that the clock algorithm is just an approximation of lru so maybe if we don't have a use bit or we don't have use and modified bits we could do something better than the clock algorithm and the answer is yes we can do something called a second chance list okay so the second chance list divides the pages of a process into two categories i'm going to call them green and yellow or directly mapped pages and second chance list pages and things that are green are pages that are mapped writable and therefore whatever the processor does it won't cause a page fault the second chance list are ones that are in dram but they're still marked invalid so that means if i get a page fault on them i'll be able to mark them as valid in a moment without going to disk but for the moment they're yellow here because they're marked invalid okay and let's look at this a little bit so accesses to pages on the active list are done at full speed otherwise we page fault and we deal with stuff in the yellow page list and we're gonna manage the yellow page list as an lru list for real and the green pages since they're directly mapped we don't get any events on them and we'll get to just do them at full speed okay so let's look at something here so suppose we get a page fault because we access some page that's either in the
yellow group or on disk let's assume for a moment that it's in the yellow group what's going to happen is these pages in green are my current directly mapped pages i'm going to get rid of one of them and put it at the end of the lru list in yellow and i'm going to take the page that i was looking for put it in the green list at the new end and mark it as valid so that page fault basically happened because i wanted to access this page here that's in yellow but it's marked invalid in the page table and so instead i get a page fault and what i'm going to do is a swap so i'm going to take this green thing that's at the back end of the fifo green list and put it on the lru list in yellow and the page i actually page faulted on is going to be in green and so what you notice here is that the green list is managed fifo but the yellow list is managed lru and if the yellow list is big enough then i'm going to effectively get something very close to lru without having to emulate the clock algorithm and have page faults all over the place okay now the other interesting thing here is if the reason for this page fault was because of something on disk what i'm going to do instead is rather than this yellow page going down to the green i'm going to take the least recently used item off the end of the yellow list throw that out and bring the page off of disk and put it at the new end of the green okay so this particular algorithm called the second chance list is basically keeping the pages that are really actively accessed in the green list and then when it throws something off the end it puts it in the yellow list where they're sorted lru but then they're given a second chance to be brought back that's why it's called the second chance list okay and you don't have to scan through the second chance list
because i'm going to manage the second chance list as an lru list so this will be just a single pointer list that keeps track of the old end and the new end okay and so you notice all i really have to do is whenever i take something out i have to be able to close up the list and whenever i put something in i have to be able to put it at the new end of the lru and the way you know that an item is in the second chance list is because a you got a page fault so it's not in the green list and b remember that database i mentioned earlier where you keep track of everything well you know that it's in memory as opposed to on disk okay so this has got some data structures that are keeping track of where everything is okay now so how many pages do i put in the yellow list versus the green if i put nothing in the yellow list then this goes back to fifo because that green list is fifo if i put all of the pages in the yellow list i get lru but i get a page fault on every reference okay so the expense of managing this as 100 percent lru is i get a page fault everywhere and i can decide kind of how much of the green list to have to avoid those page faults okay and i pick an intermediate value and the pros of this are that there's fewer disk accesses than the emulated clock might have and the page only goes to disk if it's unused for a very long time the cons are there is a little bit of increased overhead with trapping to the os in this case and with page translation i can basically adapt to any kind of pattern the program makes and later we'll show you how to use the page translation and protection to share memory between threads and that's going to be something we'll have to talk about a little later the interesting thing i want to point out here is that the second chance list was used in the original vax operating system and there's some funny history there so strecker who's the architect of the vax look it up on google you'll find it's
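one way to sketch the green and yellow lists in python is a fifo deque for the directly mapped pages and an ordered dict as the lru second chance list the sizes names and counters here are mine

```python
from collections import OrderedDict, deque

class SecondChanceList:
    """Green list: directly mapped, managed FIFO. Yellow list: in DRAM but
    marked invalid, managed as true LRU."""
    def __init__(self, n_green, n_yellow):
        self.green = deque()          # directly mapped pages (FIFO order)
        self.yellow = OrderedDict()   # second chance pages (LRU order)
        self.n_green, self.n_yellow = n_green, n_yellow
        self.disk_reads = 0

    def reference(self, page):
        if page in self.green:
            return False                          # full speed, no fault
        if page in self.yellow:
            del self.yellow[page]                 # fast fault: still in DRAM
        else:
            self.disk_reads += 1                  # slow fault: fetch from disk
            if len(self.yellow) >= self.n_yellow:
                self.yellow.popitem(last=False)   # LRU yellow page leaves DRAM
        if len(self.green) == self.n_green:
            self.yellow[self.green.popleft()] = True   # demote oldest green
        self.green.append(page)                   # faulting page becomes green
        return True

scl = SecondChanceList(2, 2)
for p in ["a", "b", "c", "a"]:
    scl.reference(p)
print(scl.disk_reads, list(scl.green), list(scl.yellow))  # 3 ['c', 'a'] ['b']
```

in the little run above the second reference to a is a fast fault a is still in the yellow list so no disk read happens and the oldest green page b gets demoted in its place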
a very famous architecture from digital equipment corporation he asked the os people do you need a use bit and they said no and so then when they got around to trying to implement replacement policies it was like oops yeah we really did need a use bit and at that point strecker got blamed for screwing up the architecture by forgetting to put a use bit in but in fact he was told he didn't need it the vax operating systems folks came up with the second chance list algorithm as a way of avoiding the use bit so even though you can't do the clock algorithm you can do the second chance list algorithm which is still pretty good all right now bear with me for just a moment the clock algorithm as i've been telling you which by the way we can use on an x86 processor because we have use and modified or accessed and dirty bits the way i told you about it is i said well there's a single clock hand and you advance only on page faults so that means that at the moment i have a page fault i've got to go to the clock algorithm to find a page maybe push it out to disk because it's dirty and i've got to work my way through that clock hand to find a free page so that i can start my access to the disk to pull the data into dram so that sounds like a dumb idea in fact that's not the way people do it what happens is there's a free list and that free list is filled up by the clock algorithm and so there's a daemon in the background that looks for free pages to keep the free list full and the things that are put at the head end of the free list if they're dirty they get written out to disk and so in that instance as long as they get written to disk by the time they work their way down to the front of the free list then anytime i get a page fault and need a new page i just pull it off the head of the free list because it's clean okay so this idea of a background clock algorithm is
what's really done in modern operating systems they often call it the page out daemon and the dirty pages get paged out by the time we get to the head and just like the second chance list if it turns out i have a page fault that needs one of these pages then i just pull it back off the free list and put it back into the clock as if nothing happened so all of these things in the free list here are second chance pages okay so i could probably color these as yellow if i wanted to be consistent with the second chance list algorithm okay the advantage here is it's much faster on a page fault you can always use the page or pages immediately okay so the last thing i wanted to say here and then we'll finish for tonight and pick up on monday is about eviction so the free list is separate from pages in memory that's the question in the chat these are still in memory so they're in dram but they're marked as invalid so they're not in the clock they've been taken out of the clock put into this free list marked as invalid and the reason it's a second chance list idea is because we can pull them back in if we need them okay so when you evict a page frame one of the things that you may not have thought about is that you actually have to figure out all of the processes that point to that page frame and that gets hard in the presence of shared pages because when we fork processes we have shared memory there's multiple processes whose page tables all point to the same page and so there's something called a reverse mapping mechanism which has to be very fast and basically lets us go from a physical page back to all of the virtual addresses and page table entries that represent it okay and there's many ways to do this you could have a linked list of page table entries for every page descriptor that can be expensive linux has a way of grouping objects together to do a much faster way of going from
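the free list idea can be caricatured in a few lines the background sweep here stands in for the page out daemon and dirty victims are cleaned before they reach the free list this is a toy model i've added not real kernel code

```python
from collections import deque

class PageOutDaemon:
    """Toy model: a background sweep keeps a free list of cleaned frames
    topped up so the page fault path can grab a frame immediately."""
    def __init__(self, low_water=2):
        self.free = deque()          # frames already cleaned, ready to reuse
        self.low_water = low_water

    def background_sweep(self, clock_victims):
        # the daemon runs the clock in the background, cleaning dirty victims
        while len(self.free) < self.low_water and clock_victims:
            frame = clock_victims.pop(0)
            if frame["dirty"]:
                frame["dirty"] = False   # stand-in for writing back to disk
            self.free.append(frame)

    def get_frame(self):
        # the page fault path: just take a clean frame off the head
        return self.free.popleft() if self.free else None

victims = [{"page": "x", "dirty": True}, {"page": "y", "dirty": False},
           {"page": "z", "dirty": True}]
d = PageOutDaemon(low_water=2)
d.background_sweep(victims)
print(len(d.free), d.get_frame()["dirty"])   # 2 False
```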
physical to virtual and finding all the processes that own a page okay all right so we'll end it for now so we talked a lot about different replacement policies we talked about fifo min and lru as kind of idealized policies fifo being simple to think about but just a bad policy overall min being replace the page used farthest in the future and lru being kind of an ideal prediction based on the past that we can't quite implement we talked about the clock algorithm which is an approximation to lru where we arrange pages in a circle and we use it to find an old page not the oldest page we talked about the nth chance algorithm which is a variation that lets us divide the pages into multiple chunks instead of just two we talked about the second chance list algorithm which is another approximation of lru that was used on the vax when you don't have a use bit and next time we'll start talking about the working set a little bit more to understand better how to figure out how much memory to give each process all right i think that's good our time is expired here i hope you guys all have a great weekend good luck studying and i guess we will see you on monday ciao everybody
CS_162_Operating_Systems_and_Systems_Programming_Berkeley | CS162_Lecture_11_Scheduling_2_Case_Studies_Real_Time_and_Forward_Progress.txt | okay welcome back everybody to cs 162 we're going to pick up where we left off basically just before the midterm and we're going to talk about scheduling and so today we're going to talk about a couple of things here continuing in our vein of scheduling case studies we're going to actually talk about some real schedulers we'll talk a bit about real time and forward progress so if you remember from last time we basically talked about what the descriptions look like inside the kernel when you open a file and we had pointed at this file structure before and what i wanted to point out last time was basically this f_op pointer which points to a set of operations and what's interesting about these operations is the set includes things like how to read how to write how to open etc and as a result it allows you to have that uniform interface of open close read and write for everything from files to pipes et cetera okay and so that f_op structure we talked about is basically why you're allowed to do that the second thing we talked about was device drivers and we looked at a typical life cycle here for an i o request and what you see here is a request such as a read or write coming in from a user program executing a system call and then invoking potentially the operations that are in that f_op structure i mentioned previously and so for that request basically the first thing that happens for instance for a file read or write is we ask whether it can be satisfied already and as we talked about the reason it might be satisfied already is because the data is in the cache and when we get into file systems a little bit later in the term we will talk about that in great depth but assuming that it can't be satisfied then it goes on to get ready to talk to
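the f_op idea that each open file carries a table of operations giving one uniform read write interface can be illustrated in python the real structure is c inside linux so this is just the dispatch pattern with names of my own

```python
class File:
    """Each open file carries its own table of operations (like f_op)."""
    def __init__(self, fops, data=b""):
        self.f_op = fops        # analogous to the kernel file struct's f_op pointer
        self.data = data
        self.pos = 0

def regular_read(f, n):
    """A 'read' implementation for an ordinary file with a position."""
    out = f.data[f.pos:f.pos + n]
    f.pos += len(out)
    return out

def zero_read(f, n):
    """A device-style 'read' (think /dev/zero): no position, all zeros."""
    return b"\x00" * n

def vfs_read(f, n):
    # one uniform entry point: dispatch through the per-file ops table
    return f.f_op["read"](f, n)

reg = File({"read": regular_read}, b"hello")
dev = File({"read": zero_read})
print(vfs_read(reg, 4), vfs_read(dev, 3))   # b'hell' b'\x00\x00\x00'
```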
device driver okay and we talked about device drivers as that piece of code that talks to the actual device and knows how to do things that are unique to the device the reason i have this sending-request-to-the-device-driver step highlighted in red here is because if it turns out that the device is going to take a long time this is the point at which you put the process to sleep or the thread depending on how things are set up and then you're going to trigger something else and you have to schedule at that point okay so then we talked about how the top half of the device driver is that part that runs kind of on behalf of the process sets up the commands etc and things are put to sleep at that point it sends a command to the hardware and then the hardware takes over but the original thread that called the device driver is sleeping now okay and so the hardware comes along and eventually does the access causes an interrupt to happen and we end up in the bottom half of the device driver which is interrupt driven so that top half we talked about a moment ago was the part that basically runs on behalf of processes coming from above the bottom half is invoked by an interrupt at which point it figures out which process is waiting potentially wakes it up transfers the data and the i/o is completed okay so again one of the reasons that i particularly talked about this again this time was that somewhere between the i/o subsystem and the top half of the device driver the process actually gets put to sleep and will invoke scheduling again so that's basically the topic that we've been working on this is a figure from either the first or second lecture where we kind of show the idea of the cpu executing some thread and eventually something happens like it's an i/o request or a time slice expires okay so we talked a lot about that last time when we talked about round robin or perhaps we execute fork or we have to
wait for an interrupt because maybe we do a signal operation and we're waiting for somebody to respond to us and then that's the point at which we have to ask this question how is the os going to pick the next thing to run okay so we've got this ready queue and that's got a bunch of threads that are ready to go which one and that's the topic of scheduling and so last time we talked a lot about classic scheduling algorithms and the basic idea there is deciding which threads are given access to resources from moment to moment and the last lecture and this one and the next one are really talking about cpu resources but i'll let you know that scheduling can be applied to things like disk drives you know who gets the most bandwidth et cetera and when we start talking about scheduling disks then we'll move into i/o at that point but for now we're talking about cpu and how does the scheduling get triggered well it can get triggered by timer interrupts by other i/o interrupts it can be triggered whenever a thread voluntarily goes to sleep like when it's trying to do i/o gets stuck in the top half of the device driver and at that point is going to read the disk well it triggers scheduling to figure out what runs next so the scheduler can get run in all those circumstances in which it's time to take the current thread put it to sleep and pick another one all right so then we talked last time about policies for scheduling and we talked about three of them minimizing response time maximizing throughput and fairness so the thing about minimizing response time is that it can be very important if you're talking about the response to user input like keyboards etc so things like time to echo a keystroke in the editor we also talked about maximizing throughput so maybe in the cloud where you have these really big compute jobs what's important there is to make sure that the hardware is used as maximally
efficiently as possible and so that's a case where you don't want to context switch very often you want to run the machine at full speed maximum cache utilization etc and there's two parts to maximizing throughput one of them is minimizing the overhead for example not context switching too much and the other is using resources efficiently cpu disk memory and as you can imagine the minimizing of response time for users and the maximizing of throughput for compute are sometimes at odds with each other and today we're going to talk a bit about how to deal with the contrast there the other thing that's always in the background here is fairness and that's the question of how do you share cpu among users in some equitable way and fairness here is not about minimizing average response time necessarily because better average response time actually makes the system less fair can anybody tell me why that is do you remember why getting better average response time makes the system less fair or anybody want to take a stab at that okay priorities has something to do with it yeah so to minimize response time we're basically looking for those tasks that run very quickly and have a short burst time and so if we're minimizing response time we're favoring the short jobs over the long ones which means the long ones languish and that's less fair and we're also doing a lot of context switching which takes a little bit away from throughput so here was an example that we showed of round robin so that's the simplest thing we can do where we have a timer that goes off every so often that's called the quantum and i showed you an example here of processes one two three four on the ready queue the burst time is the time from when the thing starts running to when it does some i/o and so p1 has a burst time of 53 p2 of 8 p3 of 68 p4 of 24.
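as a sanity check, the round-robin schedule for these four bursts can be simulated in a few lines of python. this is a sketch i'm adding, not code from the lecture; the process names and function name are mine, and it assumes all four processes are on the ready queue at time 0 in order p1..p4:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round robin over CPU bursts that all arrive at time 0.
    Returns (completion_times, waiting_times) keyed by process name."""
    queue = deque(bursts.items())          # (name, remaining burst)
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)      # run one quantum (or less, if done)
        time += run
        if remaining > run:
            queue.append((name, remaining - run))   # back of the queue
        else:
            completion[name] = time
    # waiting time = completion time minus the time actually spent running
    waiting = {n: completion[n] - bursts[n] for n in bursts}
    return completion, waiting

# the lecture's example: quantum 20, bursts P1=53, P2=8, P3=68, P4=24
comp, wait = round_robin({"P1": 53, "P2": 8, "P3": 68, "P4": 24}, quantum=20)
```

running this reproduces the numbers discussed next: p1 waits 72, the average waiting time is 66.25, and the average completion time is 104.5.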
and we talked a lot about the fact that if we put this in a fifo queue and ran p1 to completion to its next i/o operation then p2 then p3 then p4 this is going to be very bad for response time because you could by accident end up with the longest tasks first and then the short ones which are what the users are waiting for don't get to run and so what round robin does is shown by the gantt chart where if we have p1 through p4 on the ready queue and we have a quantum of 20 units of time then p1 runs for 20 units and then we stop and put it at the end of the queue and then p2 runs well it can only run for eight because it's only eight long and then p3 runs for 20 and p4 runs for 20 et cetera and so you can just sort of simulate this on your own given the quantum of 20 and the burst times 53 8 68 24 and you see what the results are and then we can talk about the various waiting times the threads had to experience okay so these are processes and p1 here had a total wait time of 72 so those are all the times where it's not running before it's done that it has to wait and for p2 p3 p4 etc we can compute those we talked about the average waiting time here being 66 and a quarter and the average completion time being 104 and a half and so this round robin is simple right we cut the tasks off so they don't run for too long and the pro of that is it's much better for short jobs than if we just run every job to completion but the context switching can start adding up and so we did talk last time i encourage you to see that lecture if you were perhaps studying for the midterm but one of the things we have to do is we have to balance this rapid switching with the overhead okay and what we would like in a typical system we talked about is a quantum somewhere between 10 and 100 milliseconds with a switching time of something like 0.1 millisecond so that we're trying to keep things at under one
percent overhead in that instance okay so that was round robin we also talked about an idealized thing what if we knew the future so the problem with round robin that we noticed here is it still isn't the most responsive because for instance p2 which is the shortest job ideally would run first because perhaps that's a user that only needs a few cycles every keystroke okay but of course the problem with that is what's the biggest issue with putting the shortest job first okay so you guys are thinking too hard here so the biggest problem is we don't know which is the short job right so the biggest issue here is we don't know the future okay but if we did and you're right it does cause starvation if we always manage to run the shortest job first over and over again we could starve out the long ones but the biggest issue is the future and so we talked about something where we mimic the best possible first come first serve or fifo ordering by always putting the shortest job first okay and that's called shortest job first which is run whatever job has the least amount of computation to do or shortest time to completion first stcf there's an interrupting version of this called shortest remaining time first srtf which is a preemptive version where if a job arrives that has a shorter time to completion then it gets to run and we talked about this and basically you can apply this idea to the whole program or to the cpu bursts and the big effect is on short jobs so that the short jobs really get to run quickly and the long jobs mostly don't notice unless there's so many short jobs you get starvation and so this is a great idealized scheduler if we could only do it and the biggest problem was of course how do you know the future so the pros and cons of srtf are one it's optimal from a response time standpoint so if you're going to measure any real scheduler against an optimum srtf is a good benchmark but two it's very hard to predict
the future we talked about some options last time things like moving averages and kalman filters somebody brought up the idea of some sort of machine learning to try to predict how fast the jobs were and the obvious other thing is it's unfair because the short jobs get to run in preference to the long ones and if you do that too much you starve okay good were there any questions on this before we move into some new material i thought i'd make sure that i got everybody up to speed on what we did last time okay so now how do we handle a simultaneous mix of different applications so today's systems have a mix of user interaction and long-running things so your cell phone is busy dealing with your swipes and taps while at the same time it might be actually computing the data in the background from your latest exercise session figuring out kind of what sort of machines you're on okay that's sort of machine learning kind of stuff so that might be trying to run with full cpu while the other quick things need to respond to you quickly and so this is an interesting thing right because the different schedulers we talked about last time some of them are ideal for throughput so fifo where you just run everything to completion or until the next time it does i/o is great for throughput not so good for responsiveness so if we want a mix of interactive and high throughput apps we have to figure out how to best schedule them we have to figure out how to recognize one from the other and you start asking the question do you trust an application that always says it's interactive and use that to give it priority okay that seems like it's going to get abused right you're going to end up with these apps coming off of the app store that always tell the cell phone that they're the most important app in the world and of course nobody's ever going to get any other work done right and buried
in this of course is the question of should you schedule the set of apps identically on servers workstations ipads cell phones is every platform the same and you can imagine probably not so here's this burst time graph which i showed you last time and if you remember what this measures is the frequency of tasks with a given burst time where burst time is the time from when the thing starts running to when it does its next i/o and the reason we typically have a peak toward the low end is because there's a lot of user interactivity and so as a result you tend to have a lot of really short tasks and then you have a long tail full of long tasks and so maybe we might imagine that short bursts reflect interactivity which reflects high priority somehow and this in fact is the assumption encoded into many schedulers many of them decide that apps that sleep a lot and have short bursts must be interactive okay and so they give them high priority things that compute a lot and don't have a short burst get lower priority with the notion that somehow they're going to notice it less because the short bursts are going to get out of the way quickly okay and that simple heuristic has been used a lot it turns out it works pretty well but it's really hard to characterize apps for sure because you have these exceptions that prove the rule what about apps that sleep for a long time and then compute for a long time or what about apps that have to run under all circumstances like real-time apps we'll talk about that later in the lecture okay so let's look at a common structure that was used a lot in schedulers it still is used in a number of them this is called the multi-level feedback scheduler and what we do is rather than having a single ready queue we have many okay this particular diagram shows you three and the top queue is the highest priority and the bottom queue is the
lowest and things go in between and we also vary things like the quantum okay and so the quantum here is how often we do round robin so we do round robin quickly at the top we don't break things up quite as quickly in the middle and then we have fifo or first come first serve at the bottom now there was a question in the chat here about is there a scheduler based on inference from machine learning models sure people have tried all sorts of things the trick with machine learning is you have to make sure that the time it takes to classify an application doesn't become overhead that completely swamps the advantages of your scheduler okay so you have to make sure that whatever you do is fast and machine learning isn't always fast machine learning is something that's done over time and so now you start talking about some interesting trade-offs between how accurate you are versus how much time it takes so this multi-level feedback scheduler is another method for exploiting past behavior okay it was first used in the ctss system so this is from a long time ago and as i said multiple queues each with a different priority the higher priority queues are often considered the foreground tasks the lower priority at the bottom here are background and every queue has its own scheduling algorithm and here's the trick we start everybody out at the top and they're running with a round-robin quantum of eight and if they run so long that they get interrupted before they do i/o then we decide that maybe they have more computation and we move them to the next queue and down in the next queue they get run with quantum 16 at slightly lower priority and if they also exceed that then we move them down into the fifo queue and the minute that we do some i/o we move them back up to the top okay all right everybody with me on that and so long-running compute tasks start at the top and they get demoted to low
priority automatically and things that have a lot of short tasks tend to float to the top okay now one thing to note about this is it kind of approximates srtf because it's predicting the future of each task the tasks with short bursts tend to float to the top and the ones with longer bursts tend to go to the bottom and so this is a way of getting at srtf when we can't perfectly predict the future okay now each queue has to have some scheduling done for it so if we did fixed priority scheduling where the top one's the highest priority and then the next priority and the next one then this is fine except you could imagine starvation happening pretty easily here because if we keep having short tasks you might never get the long ones to run okay now the question here of does this mean that the long tasks down at the bottom have less context switching because they end up in a fifo queue so yes they have less context switching amongst themselves but there's still the context switching of the queues above okay so the long-running tasks are still going to get somewhat interrupted when the short-running ones run okay now the problem with fixed priority scheduling is you can imagine the starvation issue another idea is that each queue gets a certain amount of cpu time starting from the top with a large amount down to the bottom which has the lowest so you can maybe give 70 percent of the cpu to the top one 20 to the next 10 to the next okay now if you're starting to get a little nervous about all of the heuristics here heuristics being how many queues what are the quanta what fractions of cpu go to each queue you're right to be a little skeptical about that in fact there were schedulers i'll talk about one a little bit later in the lecture that were set up along these lines and it turns out that the heuristics start getting so complicated that nobody really knows how they work or why so that's a danger right
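the demote-on-quantum-expiry behavior just described can be sketched in a few lines of python. this is a toy model i'm adding, not code from the lecture: the function name is mine, the three-level configuration (quanta 8 and 16, run-to-completion at the bottom) matches the diagram as described, and the promote-back-to-top-on-i/o rule is mentioned in a comment but not modeled in this tiny trace:

```python
from collections import deque

def mlfq(bursts, quanta=(8, 16, None)):      # None = run to completion (FIFO)
    """Toy multi-level feedback queue: a task that uses its whole quantum
    is demoted one level; a task that blocked for I/O first would be
    promoted back to the top queue (not modeled here)."""
    queues = [deque() for _ in quanta]
    for name, burst in bursts.items():       # everybody starts at the top
        queues[0].append((name, burst))
    time, completion = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # strict priority
        name, remaining = queues[level].popleft()
        q = quanta[level]
        run = remaining if q is None else min(q, remaining)
        time += run
        if remaining > run:                  # used the full quantum: demote
            queues[min(level + 1, len(queues) - 1)].append((name, remaining - run))
        else:
            completion[name] = time
    return completion

# a short task A and a long task B: A finishes at the top level,
# B gets demoted through all three queues
comp = mlfq({"A": 5, "B": 30})
```

in this trace A completes after 5 units without ever being demoted, while B burns its quantum-8 and quantum-16 allowances, drops to the fifo queue, and finishes at time 35.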
so one other thing that's interesting here is this particular scheduling scheme is subject to a countermeasure by users okay so the countermeasure would be something that a user could do that's going to foil the intent of the os designer so for instance in a multi-level feedback scenario you put in a whole bunch of meaningless i/o just to keep the job's priority high and of course if everybody did this then the scheme doesn't work right and there's a famous example of this back in the early days of game-playing computers there was an othello contest where everybody brought their othello-playing programs othello is a board game for those of you not familiar with it it's not just a shakespearean character and you play against a competitor and so the key was you wanted as much cpu as you could get and at one point the winning team found out that if they just put a whole bunch of printfs in a tight loop they could get scheduled more often and have a lot more cpu time all right so there's an example of a malicious program exploiting the underlying scheduler all right now there is a real case of a scheduler like this by the way i will say that there are many schedulers like this in the world sunos was notorious for having a very complex one of these linux had something called the o(1) scheduler okay and it actually had 140 priorities okay if you look here the first hundred of them from 0 to 99 were considered real-time priorities and those are the highest 0 by the way is on the left and the lowest priorities are on the right and then the user tasks had another 40 priorities which were changed by the nice command okay so 40 for user tasks 100 for real-time or kernel tasks a lower priority value here meant higher priority and a higher priority value meant lower priority so i realize that's confusing but 0 is a high priority and the key thing that made this o(1) was it didn't matter how many tasks there were in the
system the computing that the scheduler did was always o(1) so that seems like that ought to be a good thing you know we were talking about machine learning earlier and you could imagine that the more tasks you got the more machine learning you were doing and maybe things wouldn't scale in constant time but rather might scale as the number of threads or something like that and so you'd get very bad behavior as you added threads so the great thing about the o(1) scheduler was all of the internal scheduling data structures and so on were o(1) so that seems like a good thing okay and time slices that means quanta priorities interactivity credits are all computed when the job finishes its time slice i'll say a little bit about what that means in a moment but you could imagine if i've got 40 possible user task priorities we'll ignore the real-time ones for a moment then i might want to try to deal with interactivity by moving things that had short bursts to higher priority temporarily as long as they had short bursts so that's where the heuristics start coming into play okay and the way that this ended up being o(1) was there's two completely separate priority queues for the ready queue one called active and one called expired and all tasks in the active queue would run until their time slice expired and then they'd get placed on the expired queue and you'd go through and everybody would get to run and then you'd swap them okay and so it ended up being o(1) as a result and the time slice depended on priority linearly mapped so things with higher priority got to run longer than things with lower priority okay so this is very similar to a multi-level queue in fact it is a multi-level queue kind of in disguise because we have 140 levels here okay and the decision about how you move something back and forth between queues is where the heuristics come into play okay now here's another look at the o(1) scheduler basically you have the
expired and the active queue with a bunch of priorities basically you run each task at the highest priority and then when its quantum expires you flip it over to the expired queue and you keep going until there's nothing left and then you swap the two and the thing that made this complicated was not what i just described to you what made it complicated was all of the heuristics to boost the priority of i/o-bound tasks up and down or to boost the priority of starved tasks up from the low priorities in order to make sure that somehow all users of the scheduler were happy okay so heuristics would take every process or thread and make a decision about moving it down in priority or up in priority based on its past behavior and these heuristics were very complicated okay so the heuristics are interesting to at least talk about so the user task priority got adjusted plus or minus five based on heuristics involving how long it's been sleeping versus how long it's been running and a higher sleep average meant it was a more i/o-bound task and you got more reward you got to raise your priority there's something called an interactive credit which was earned when the task slept for a very long time and spent when the task ran for a very long time and the interactive credit provided some hysteresis to avoid changing the priorities too frequently and things that are interactive got some special dispensation so if it really figured out something was interactive then it wouldn't even do that run-one-quantum-and-switch-over-to-expired thing it would get a chance to keep running for a little bit before switching over hopefully you're starting to see that this is complicated right the clean thing was the real-time tasks so those 100 priorities on the high end were always run at their
priorities they always preempted the non-real-time tasks there's no dynamic adjustment and some very well-defined schemes so either a fifo where you ran to completion or round robin where you ran with a fixed quantum to completion so the real-time priorities were nice and clean and predictable but it was a strict priority scheduler and the heuristics were complicated okay so i will tell you the end of the story here which is basically this got so complicated that a bunch of maintainers of linux basically decided that they were tired of it because the heuristics got too complicated for anybody to understand their exact behavior and eventually linus and a few others basically threw out o(1) and came in with cfs which we'll talk about in the later part of the lecture but it's interesting to note the dilemma that a scheduler designer is in so if you're the core developer of some operating system that's used by a whole bunch of people and they have relied on the behavior of your scheduler and its heuristics but somebody isn't quite happy so you need to change something you don't want to change the heuristics too much because now everybody else is going to be unhappy and so you start making little tweaks and you get this complicated decision tree if this and that and that change this by a little bit and then make this decision and things rapidly get out of hand and at one point in the 2.6 kernel they just gave up and threw up their hands and tossed out the o(1) scheduler even though the scheduler itself is extremely efficient as the number of tasks grows it's just too complicated to understand and it starts doing weird things that nobody knows why and it's not easy to make it work well okay questions so the end of this story by the way is that o(1) doesn't exist anymore well it exists but nobody uses it okay so administrivia so we're still grading midterm one i think it was a pretty reasonable difficulty it might have been a
little bit on the hard side but we'll know more when we get things up we had some people that had some issues with the zoom recordings so we'll probably look extra carefully at people that missed recordings but may give a pass for not having them this time but you might want to practice getting the zoom portion of that set up just so that it works smoothly with midterm two it seemed like when we finally settled on the actual zoom proctoring that we did people mostly were okay with it and it mostly worked so that was a good thing there was a little bit of a discussion and i just wanted to say something to make sure everybody knows this but yes we are allowed to zoom proctor midterms as well as finals so the cs half of the department was actually given permission to proctor midterms for select courses in addition to finals and cs 162 is authorized we requested and we were authorized to do that just so if you have other folks that are still wondering about that we do have that authorization and i think it worked pretty well i know that people got a little nervous about it you know don't be nervous if everything worked out you'll be fine but i'm hoping it gave people a little bit of a sense that they could just do the exam normally without considering that cheating was a requirement you should let us know how it went we're actually going to put out a survey soon just to know how people think the class is going midterms and projects and everything because this is a very hard term obviously being all virtual so the bins are going to pop up before the grading is done i haven't looked at any of the grades yet the bins are essentially slight tweaks off of what was in the summer but i hadn't gotten them up yet so i apologize for that but we really are setting the bins independent of the grading so
the problem with noise cancelling headphones is really the issue of not knowing who's listening to what so there's a little bit of a challenge on that we will try to figure out things about that as we go and there may be some way that we could handle it but for now no headphones but maybe we can work something out let's put that on the list to figure out okay all right let's see yeah the question about when we'll be done grading we hope to be done certainly later this week as you can imagine things get a little trickier in this format for grading and so on so we're working on it and as soon as we can we'll get them out we won't make you wait too long i promise so now back to non-midterm stuff so group evaluations are coming out for project one soon because project one is almost done and the way this works is you get to evaluate your partners for how well they're interacting in your group okay and so every one of you gets 20 points for every other partner not yourself and you get to distribute it to your partners in any way you want so if there's a four-person group that means you get 60 points because there's three other partners and you get to distribute it to the other partners okay no points to yourself and this is one of many evaluation techniques this is not the only one but this is one of many evaluation techniques that we use to understand kind of how partners are working with each other the other one of course is what your tas understand about your project dynamics or what you've talked to any of us about okay but in principle if a partner really isn't participating at all in the extreme cases almost never happens but it can all of the missing partner's points could be redistributed to the other partners if that partner's not doing anything okay you could think of this as almost a zero-sum game at that point and the reason we do this and we've done
this in 162 forever is that really this is a project course and you're supposed to be working with your partners and relying on each other okay and so this is a way of us understanding how you're doing and one of the reasons i'm bringing this up is there are a couple of folks in the class that have essentially dropped off the earth i think they were kidnapped by aliens i'm not entirely sure but if you're one of those people and you're hearing this broadcast out on mars please come back and start working with your group again okay respond to email respond to your tas respond to your other partners okay all right come back from mars now you might want to make sure that your ta understands any group issues you might be having i'm happy to meet with groups that want to do a bit of fine tuning on their interactions but now that project one is essentially done let's figure out how to get a fine-tuned happy group and to that end we're going to start with the group coffee hours i promised at the beginning of the term one of the tas is going to be posting how to do this a little bit later in the week but the idea is you can get extra credit points for screenshots of you and your team with cameras turned on interacting and holding up your favorite beverage of choice and this is just a gimmick but on the other hand it's a reminder that you ought to be interacting with your group with your cameras turned on just to get things working if you're dealing with extreme kinds of group issues you know it starts with actually seeing the other members and talking to them okay texting tweeting slack pick your favorite communication technology that doesn't involve video these are all fine and they have their place but they can't be the exclusive way that you interact because things are just not gonna go well all right and look if we were in real life instead of
virtually you would be meeting with your team all the time so let's see if we can get the groups working well again okay you don't have to be holding a beverage you could be pretending to hold a beverage if you like you know glass of water works cup of coffee whatever all right okay and don't forget to turn the camera on for discussion sessions okay all righty now i think that's all i wanted to say administrivia-wise so we'll get the final grades of the exam out i think things went fairly smoothly so that's good okay so does the os schedule processes or threads well many textbooks use the old model which is one thread per process as we've already talked about oh and by the way look if you really can't use a camera for some reason talk to us okay but i would really like you guys to try to interact in whatever way works so does the os schedule processes or threads so many textbooks as i said use the old model one thread per process all right and this was the case for decades and then the advantages of threads started becoming obvious so you want a single protection domain with lots of concurrency in it the only way to do that is many threads per process and the way this started was it started with user-level threads being scheduled on top of a single kernel thread and then that got moved into the kernel to some extent which is where we are now with things like linux okay so usually the scheduling is on a per-thread basis not a per-process basis the only reason we might think about processes is really if we were interested in understanding some sort of fairness which said that each process gets a fraction of the cpu and then we divide it up per thread that's a policy but the way that would actually be implemented today is you divide the cpu up per thread based on that policy and then the threads are the things that are scheduled because the
threads are the things that are being switched out inside the kernel. One point to notice is that switching threads versus switching processes incurs slightly different costs. If you really know that you're switching from thread A to thread B in the same process, the overhead is lower, because in switching threads you only have to save and restore registers, whereas in switching processes you're also changing the active address space, which can get a little expensive and certainly disrupts caching. I think I showed you that there can be a factor of 40 difference in Linux between these two things. Now, just to tie together the beginning of the class: simultaneous multithreading, or hyperthreading, is available on some CPUs, and remember that the idea there is that different threads are interleaved on a cycle-by-cycle basis on the same CPU. There's some magic there that you'd talk about if you took 152, for instance, but in those instances the different threads can each have different pointers to their page tables, which means they can each be in different processes, and the hardware would still switch them on a cycle-by-cycle basis. So if you have hyperthreading you might get really fast switching, but in general, if you're switching from one thread to another on a CPU and you have to switch the address space, that's more expensive. Now, what about multi-core, or even multi-processor, where you have a bunch of multi-core chips tied together into a big shared-memory machine? Algorithmically there's not a huge difference from single-core scheduling, except that there are a bunch of simultaneous things that can be running. So now you have a choice: if I have a big pot of potential threads on my ready queue, which group of them do I have running at the same time? It's helpful in some sense to have a per-core scheduling
data structure, for, among other things, cache coherence. Each core typically has a first- and second-level cache in today's processors, so if you have a thread that ran on core one, was put to sleep, and then went back to core one, you're going to have some cache state it can use, whereas if you always schedule the thread on a different core, you don't have the advantage of the cache. So there's something called affinity scheduling, which most good operating systems have, which basically says that once a thread is scheduled on a CPU, the OS tries to reschedule it on the same CPU to reuse the cache or other CPU-local resources — branch predictor state is another good one. Of course, if there are 20 idle cores and one busy core, there's going to be a point at which affinity scheduling is traded off against parallelism, and probably the choice will be made to migrate the thread at some point. But we have to start thinking about these issues. And here's an interesting thing that we kind of brought up when I showed you test-and-set, but I want to re-emphasize it. Remember the idea of a spin lock — this was the thing not to do with test-and-set, I told you. The way you do an acquire of a lock is you run test-and-set on the address of the value until you eventually get back zero. The reason is that test-and-set, if you remember, grabs the value, stores a one, and returns the old value; if the value was zero, it means the lock was free. Even if you have a thousand threads that all simultaneously do test-and-set, because it's an atomic operation only one of them ever gets the zero and all the others get one, so the one that gets the zero exits the while loop and is now in the critical section. The way you release the lock is you set the value back to zero. So a spin lock doesn't put the calling thread to sleep; it just busy-waits, which Kubi said is a bad
thing, right? Busy waiting is bad. Well, okay, I'm going to tell you one instance where it may not be bad — don't do this at home, folks. When is this preferable? It might be preferable if you've got a set of threads running simultaneously on the same task, waiting at a barrier for each of the threads to finish. Let's give an explicit example: there are 20 threads, and they're going to run in parallel for a while, and then they're all going to wait until everyone is done before they continue, just like a join. In that instance you want something like a test-and-set that spins until the last of the threads is done and then releases quickly. The reason it can release quickly is that we don't have to reload the thread off of some ready queue or wait queue, reload all the registers out of the TCB, and so on; we don't even necessarily have to dive into the kernel. So if we know that the set of threads that are spinning are all part of the same task, this could be okay, because everything would wake up very quickly. Every test-and-set is a write, unfortunately — we'll come back to that — but anyway, I want to stay on this point for a second. This could be preferable if you've got a multiprocessor program with some simultaneously scheduled threads that are all spinning, waiting for each other; in that instance it's okay to spin because they're all part of the same task. Now, you've got to be careful not to do this for too long, because you'll end up wasting cycles if you do it incorrectly. How would you know they're waiting for each other? That's a good question. The reason you'd know is that you've written a multiprocessor program that you know has a barrier: every thread comes to a single point and waits for all the others to get to that point before continuing, and you ask the operating system to
schedule you all simultaneously, so all of the cores are working on the same thing — then this might be okay. So if you're trying to optimize a parallel program, for instance, you might use spin locks. And the limit on how many threads you could have would be the number of cores, exactly. You have to be very careful about doing this. Now, back when I was building multiprocessors a while ago, we actually had a variant of this which was called two-competitive. What that meant was you'd spin until the time that you've wasted spinning is exactly equal to the time it would take to put you to sleep, and at that point you'd go to sleep. So in the best case, where you're only waiting very briefly, threads would spin a bit and then exit; if something screws up — interrupts happen, or you don't have enough things scheduled — then you'd go to sleep after spinning for a while. It's called two-competitive because in the worst case you'd never waste more than twice what you'd have wasted by going to sleep right away. Now, of course, the problem with this spin lock is that test-and-set is a write. Why is that a write operation? Well, it's a read followed by a write, and in cache coherence a write is a bad thing, because it invalidates all the other copies and then pulls a copy into your cache before it does the write. So if every core is doing "while test-and-set", that poor lock is bouncing all over the place and you're using up a whole bunch of memory bandwidth. What you really want is test-and-test-and-set, which we showed you in lecture seven. What you do is you say "while value" and you spin there — that's a read, so everybody gets the ones into their caches and they're just spinning locally — and then as soon as the value goes to zero you do test-and-set to grab it, and so you're vastly speeding things up as a result.
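To make that acquire/release pattern concrete, here's a minimal sketch of a test-and-test-and-set spin lock — a simulation, not real kernel code. Python has no hardware test-and-set instruction, so a small internal `threading.Lock` stands in for the atomicity of the hardware operation, and the `SpinLock` and `worker` names are just made up for illustration:

```python
import threading

class SpinLock:
    """Test-and-test-and-set spin lock (simulated in Python)."""

    def __init__(self):
        self._value = 0                  # 0 = lock free, 1 = lock held
        self._atomic = threading.Lock()  # stands in for hardware atomicity

    def _test_and_set(self):
        # Atomically: read the old value, store a one, return the old value.
        with self._atomic:
            old = self._value
            self._value = 1
            return old

    def acquire(self):
        while True:
            # "Test": spin on plain reads, so waiters hit their own caches.
            while self._value == 1:
                pass
            # "Test-and-set": only now attempt the expensive atomic write.
            if self._test_and_set() == 0:
                return                   # we got the zero; the lock is ours

    def release(self):
        self._value = 0                  # mark the lock free again

# Usage: two threads bump a shared counter under the spin lock.
lock = SpinLock()
count = 0

def worker():
    global count
    for _ in range(5000):
        lock.acquire()
        count += 1
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)  # 10000: no increments lost
```

The inner read-only loop is what distinguishes this from plain test-and-set: waiters spin on cached copies and only generate write (invalidation) traffic at the moment the lock actually looks free.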
Okay, so now, when multiple threads are working together like I just said, the only way this works well is if they're all scheduled at the same time. That's called gang scheduling, and there are gang-scheduling operations that kernels offer, which makes spin-waiting more efficient — because it's really inefficient to spin-wait for a thread that's suspended and sleeping on the wait queue; then you really are wasting time. There are also some alternatives where the OS informs a parallel program how many processors its threads are scheduled on, called scheduler activations, and there the application adapts to the number of cores it's scheduled on, so you get kind of the best of both worlds: you only have as many threads as you have cores currently scheduled. Now let's talk about real-time scheduling. What we've been talking about up to now is scheduling that optimizes either response time or throughput, or maybe some sort of fairness, which is some combination of them. Real-time scheduling has a different goal: what's far more important in real-time scheduling is predictability of performance. A typical real-time task might be something like the brakes on a car: there's a limit on the time from when you slam on the brake to when the brake pads start slowing you down, and we want to make sure it happens predictably and quickly — otherwise maybe I slam on the brakes and I end up hitting something. So in real-time scheduling our goals are different: it's about predictability and meeting deadlines, and here we need to predict with confidence, for instance, the worst-case response time of the system — not how to optimize response time; that's a different thing. In a real-time system, performance guarantees are often task- or class-centric, and they're figured out in advance; I'll show you how we do that. But the simple example that
would be the time between when I slam my brakes on and when the brakes start working: there's a deadline there, and no matter what the scheduling of the system is, we hope that deadline is never exceeded. In contrast, in a conventional system, performance is system- or throughput-oriented; it's kind of wait-and-see — we'll try to run everything we can at the best speed we can — whereas real time is about enforcing predictability. So: hard real time is for time-critical, safety-oriented systems, like brakes. The idea there is that you're going to meet all the deadlines if possible, and determine in advance whether this is possible. There are some good schedulers — we'll talk about one called earliest deadline first, EDF, but there are also things like least laxity first, rate-monotonic scheduling, deadline-monotonic scheduling, etc. Soft real time is like hard real time, but softer; it's used for things like multimedia, where we're going to try to meet all the deadlines. In the case of a video, you try to make sure that every frame comes up at the right time, but if you miss a video frame it's not the end of the world — whereas if you miss something in hard real time, your car runs into a wall. So we try to meet the deadlines with high probability, and something like a constant bandwidth server is a good example there. Okay, we're going to take a really brief break, and we'll be right back. So let's see if we can define this real-time scheduling problem a little more succinctly. In a typical real-time scenario, tasks are preemptable, they're independent, and they have arbitrary arrival (or release) times; tasks typically have deadlines and known computation times. And here's an example setup. If you take a look here, we have threads one, two, three, and four. Let's look at thread one: there's a release or arrival time — that's the up arrow — and there's some computation, which is
represented in gray here, and then there's a deadline, which is the point by which the real-time scheduler has to have completed the computation. Although I show you here that all of the computation happens right at the beginning, it could be spread anywhere between when the task arrives and when the deadline is, and that would be fine as long as it's completed by the deadline. And T2 here has an arrival that's earlier than T1's and a deadline that's later, etc. So in addition to those key parameters — when does it arrive, what's the computation, what's the deadline — notice that since we have overlapped computation, this is not a possible scheduler result if we only have one core, because we would need multiple things executing at the same time for it to happen this way. Is everybody with me on this model? Questions? So if this doesn't work, what could we do? Well, we could try running a round-robin scheduler. Notice, by the way, that T4 arrives first, then T3, then T2, then T1. So T4 runs with some quantum that we come up with — T4 gets the first quantum, and then there's nothing else to run, so it gets to run again, but now T3 is there — and so this might be a round-robin schedule of that previous set of threads. What happens is we hit a point at which we haven't finished all the computation but the deadline shows up. So in this instance round robin doesn't work, and your car runs into the wall. This seems unfortunate. What's the problem here? The problem is that round robin has no notion of deadlines — it wasn't designed for deadlines; it was designed for multiplexing — and the requirements of a scheduler for deadlines are fundamentally different from the requirements for multiplexing. Now, one of the most common and, I'll call it,
famous schedulers is called earliest deadline first. In this setting our threads are typically periodic: they have period P and computation C in each period. What we mean by periodic is that the thread has an arrival that happens over and over again; the computation is always the same, and the next arrival is right at the deadline. So you can imagine that the thread keeps getting reintroduced regularly, and the parameters are the period — how long the time is between arrival and deadline — and the computation. The trick is: can we schedule this in a way such that we don't miss any deadlines? In EDF, every task has a priority based on how close its absolute deadline is — kind of makes sense, right? You take the set of threads that are currently on the ready queue, ask which one is closest to its deadline, and that's the one you let run. So this is a type of priority scheduling where the priority is based on proximity to the deadline; that's why it's called earliest deadline first. Here's an instance where thread one has a period of four and a computation of one, thread two has a period of five and a computation of two, and thread three has a period of seven and a computation of two. So thread one is obviously arriving most frequently, thread two a little less frequently, and thread three is the least frequent. Now let's run this, and let's assume that everybody arrives at time zero. If they all arrive at time zero, then, for instance, four time units later thread one hits its deadline and arrives
again. So if we look from time zero, which one has the closest deadline? Clearly thread one's, so we let thread one run; it runs its one unit of computation, and it's done. Thread two now has the next-closest deadline, so it gets to run with its two units of computation, and then, last but not least, thread three gets to run with its two units of computation. Now, as I told you, this is periodic, so in fact thread one will have a new arrival, thread two will have a new arrival, thread three will have a new arrival, and we can keep looking at the scheduling. I'm not going to go over this in detail, but what we're doing is saying that at any point in time, the thread whose deadline is closest is the one that gets to run. Questions? What we would find is that, assuming we've been careful not to overload the system, we will always meet deadlines. And notice the requirement, by the way: preemption has to be a possibility. If you were to run these long enough, you would find that eventually some of them get interrupted — they compute for a little while, then something higher priority runs, and then they compute afterwards. As long as your tasks can be preempted, EDF is the best way of handling this particular scheduling requirement. And how do we know this? Well, even EDF won't work if you put too many tasks in: if you fill this up with so much computation that you're using more than 100% of one CPU, then you're not going to be able to schedule it. Now, about the question of how tasks submit their periodicity to the scheduler: they would actually say, here's my thread, here's my periodicity, and here's my computation — they input that. So this is not just an idealistic scenario; this is a real scenario. The thing that you're probably wondering — and it's a very good thing to wonder — let's assume you
are wondering: how do I know what C is? If you go into the real-time literature, a lot of work has been done on how you compute the worst-case time for a computation — that's what we're calling C here. There's a lot of work both in having the compiler compute what C is and in building processors that are more predictable than regular ones. You might imagine, for instance, that the cache actually gets in the way of predictability, and so some people who design real-time processors completely disable the cache. And yes, that's right — this only cares about the deadline, not deadline minus computation time, because by my problem statement, as long as we get all the computation in before the deadline, we're good. Now, even EDF won't always work, but it turns out EDF is optimal in the following sense: if you take, for each task, the amount of computation divided by the period, and you sum all of those up, EDF can schedule the tasks as long as that sum is no more than one. Let me give you a very simple intuition for why that makes sense. If there's one unit of computation every four time units, one divided by four means I'm using up a quarter of the CPU; two out of five is another 40 percent; and so on. If I add up all those fractions and they come out to more than one, then I realize there's absolutely no way to schedule this. And EDF is optimal in that you can use 100% of the CPU, if you ignore the switching overheads. Now, how do we ensure progress? Starvation is a situation where a thread fails to make progress, and starvation is not deadlock. Next time we're going to talk about deadlock; starvation is something that could resolve under the right circumstances, whereas deadlocks are unresolvable — but starvation can still be bad. And there can be causes
of starvation like: the scheduling policy never runs a particular thread, or threads wait for each other and spin in a way that will never resolve but isn't a cyclic deadlock. By the way, deadlock is a type of starvation; not all starvations are deadlocks. So let's look at what kinds of starvation we could have. Here's a straw man: a non-work-conserving scheduler. You have to know what work-conserving means: a work-conserving scheduler is one that does not leave the CPU idle when there's work to do. A non-work-conserving scheduler could trivially lead to starvation if, for instance, it just doesn't schedule something — maybe there's a bug in your scheduler. But let's assume everything's work-conserving. Here's a different one that is work-conserving but could still lead to starvation: last come, first served. This is a LIFO stack — late arrivals are put on the top of the stack and get served first, while the early ones end up waiting. This is extremely unfair, and in the worst case, if tasks keep arriving, the original ones never run. That's when the arrival rate exceeds the service rate — we'll talk more about queueing later in the term, but this is a queue that, if it builds up faster than it drains, the things that arrived early never get to run. If we had FIFO instead of LIFO, the queue can also build up, but at least there we're servicing the oldest things first; you can still have a queue where things arrive too fast and you're not servicing them all. So what does it mean for the CPU to be idle? It would be a situation in which it's not actually doing any useful user work — instead it's spinning, or it's basically sitting in the idle thread, not running things that are ready to run. So if things are schedulable — they can run — then we want to
make sure we always run them. Now, what about first come, first served? We showed you this idea last lecture: things are arriving — those are these colored threads — and they get scheduled in the same order they came, but notice that this red one is very long, so while it's running, all these other ones are building up, and when it finishes the other ones get to go in FIFO order. This leads to starvation, because if a thread never yields — it goes into an infinite loop or something — then other tasks never run. This is the problem with all of the non-preemptive schedulers: if you have a buggy task, or an anti-social task, you basically get starvation. All of the early operating systems on personal computers had this problem — I mentioned in the first lecture that things like Mac OS and Windows 3.1 had this problem. What about round robin? The nice thing about round robin is that you always cycle through every task, so each of the n processes gets 1/n of the CPU in the worst case, and with a quantum of length q milliseconds, a process waits at most (n - 1) × q to run again. So a process can't be kept waiting indefinitely, and this doesn't lead to starvation. It's fair in terms of waiting time, though not necessarily in terms of throughput, because tasks have varying sizes based on their requirements, so we don't guarantee everybody gets the same throughput. But what about priority scheduling? We also talked about that. If you recall, a priority scheduler always runs the thread with the highest priority. So in this case, priority three has jobs one, two, and three, and it's going to run all of them — maybe round robin among jobs one, two, and three — and then finally, when that's done, it'll go down to job four, which is priority two, and if one, two, three, and four
are gone, then it'll get around to five, six, and seven. So here's a case where we're clearly going to starve: if we keep putting high-priority tasks in faster than they can finish, the low-priority ones never get to run. But there's an even more serious problem than starvation here, called priority inversion, where high-priority threads can become starved by low-priority threads under the wrong circumstances. You're about to start the next lab, and project two is basically going to start looking at scheduling, so you're going to need to address the following problem. Let's talk about priority inversion. Here's a priority inversion situation where the low-priority task, job one, acquires a lock. Now suppose it acquired that lock and then jobs two and three showed up, or suddenly became runnable, or whatever the case may be — there could be many reasons why job one was running while two and three were suspended; I/O, take your pick. Now job one has the lock, but two and three are higher priority, so maybe job three starts running. But what happens if job three tries to acquire the lock? Job three tries to acquire the lock held by job one, and it can't, so job three has to go to sleep — it's blocked on acquire. Just take a look at this picture: we have a scenario where the highest-priority task in the system, job three, can't run, and it's being blocked by job one, at least. This is an inversion of priority, which is problematic at best. Now, if job two weren't in the picture, the fact that job three is blocked would mean that job one might be able to keep running until it released the lock, and then job three would wake up right away and we'd be good to go. But the mere presence of job two here is a problem, because if job two is busy running, it could run for a very long time, and in that sense, if job two runs for a very long time,
now job one doesn't run, and as a result job three doesn't run, so you could say that job two is actually holding up job three. And this is a priority inversion that may not resolve quickly, because job two may run for a long time. So in this situation, the priorities that were designed by the designer of the tasks have been subverted by the inversion: whatever your reason for putting job three at the highest priority and job one at the lowest, it's not being honored right now, because job two, which is supposed to sit in the middle of them, is basically screwing this all up. So what can we do? Clearly we need to somehow get job one to run long enough to release the lock, so that job three can run. The medium-priority task is busy starving the high-priority one — can anybody think of what we do? A signal? Well, maybe you could use a signal here, but that would require more programming, and that might not be what we want. Yes: give task one more priority — good — or priority donation. We'll show you how to deal with that. And when else might priority lead to starvation or livelock? There are lots of cases where you might have a high-priority task spinning, waiting on a lock that a low-priority one needs to release; the high-priority one is running, but it's not running successfully, because it's just waiting. That's another type of inversion, where the thing looks like it's running but it's not doing any real work. So, yes: priority donation. The trick here is that job three temporarily grants job one its priority, so that job one gets to run at high priority long enough to release the lock. Really what we're doing is giving job one a temporary boost in priority, long enough to release the lock. And how did that happen? Job three donated its priority to job
one — or, as this is sometimes called, priority inheritance; that's another term for it. At some point job one releases the lock, and at that point job one's priority goes back to low, but the lock's been released, so job three can run, and this is the point at which we go forward. Now, how does the scheduler know? The scheduler knows because it's paying attention to the donation that's going on. And as for why job two runs before job three if job three has higher priority: if you go back to this scenario, the problem is that job three can't run because it's sleeping on the acquire for the lock. So job three is not running — it's sleeping — and job two gets to run because it's runnable; job one is runnable too, but it doesn't get to run, because job two is higher priority. Hopefully that answers the question: the reason job three isn't running is that it actually tried to do an acquire and went to sleep. And you get to actually do priority donation in project two. Now, this is not a theoretical problem. You may have all heard of the Mars Pathfinder rover. On July 4th, 1997, the Pathfinder rover landed on Mars. This was the first U.S. Mars landing since Viking in 1976, and it was the first rover. What's very cool — you should all check this out — is the way they delivered this rover to the surface: when the Pathfinder spacecraft arrived, it deployed a whole bunch of balloons that were wrapped around the rover, so the rover was inside this multi-balloon bubble thing, and it fell to the surface and bounced until it stopped bouncing, and then they deflated the balloons. The spacecraft made it and the rover made it there safely. That's pretty amusing, but it's not part of our story today. The story is that once this thing started working, it
was great — it was sending back pictures, everything was great — and then, a few days into the mission, multiple system resets occurred over and over again. The system would reboot randomly, losing valuable time and progress, and the problem was priority inversion. There was a low-priority task that was collecting data, and it grabbed a lock as part of an IPC operation; then medium-priority work just kept running — there was a bunch of other stuff going on — and the high-priority task wasn't able to run because it was trying to grab the lock. So this was an actual scenario — the lock had to do with the buses and communication — where, since forward progress wasn't being made, a watchdog timer went off and kept rebooting the machine. That was actually a good thing, because it rebooted into a safe state that could then be examined and patched. They were able to reproduce the problem after a number of weeks down on Earth, and then they sent up a patch and fixed it. The funny thing, perhaps, is that the solution was priority donation — that's easy, right? That's your project two. The thing that is perhaps even more amusing — or not, for them — is that they had turned priority donation off. VxWorks actually had priority donation; they turned it off because they wanted to make sure things were fast, and they were worried about the performance implications of priority donation, and as a result they ended up with a priority inversion that basically broke stuff. So there's your story for the night. I think up on the resources page I have an analysis by one of the engineers that talks about this particular priority inversion problem — it's a real thing. So now: are SRTF or multi-level feedback queues prone to starvation? Yeah — in SRTF, obviously, long jobs are starved in favor of short ones. MLFQ is an approximation to SRTF, so
it suffers from the same problem, so yes, we can get starvation out of it just by having a lot of short, bursty tasks running. Priorities seem like they're at the root of all these problems, because even in this instance we have queues that are higher priority than others, and we're always preferring to give the CPU to a prioritized job, so non-prioritized jobs may never get to run. But priorities were kind of a means to an end here: our end goal was to serve a mix of CPU-bound, I/O-bound, and interactive jobs well — give the I/O-bound ones enough CPU to issue their next operation and wait, give the interactive ones enough CPU to respond to input and wait, and let the long-running ones grind away on all the rest of the CPU. So priorities were really a means to get the kind of scheduling we wanted. And if you remember, we're living in a changing landscape here. This is the Bell's law curve of computers per person: back in the day, in the 60s, there might be one computer and a million people, and now we might have thousands of computers per person. We're in a very different landscape, and so the question might even be: are yesterday's schedulers the right thing? Priority-based scheduling was rooted in time sharing — allocating precious, limited resources to a diverse workload. The 80s brought personal computers, workstations, servers, etc. — different machines of different types for different purposes — and a shift toward fairness and avoiding extremes like starvation, rather than maximal use of precious resources; we want to use resources in a way that meets our requirements. That's a little different. And with the emergence of the web — the data center is the computer, and you're all walking around with a cell phone that's extremely powerful — it's all about predictability now. So, does prioritizing some jobs starve those that
aren't prioritized that's a question all right and if you give me a few more minutes before we end up here i realize i'm running a tiny bit late but proportional share scheduling is an idea where we're going to hand out portions of the cpu okay so the policies we've studied so far always prefer to give the cpu to a prioritized job non-prioritized ones never get to run instead we could share the cpu proportionally give each job a share of the cpu according to its priority so that low priority jobs get a little bit less of the cpu than high priority jobs but everybody gets to run and if you recall from last time we talked about lottery scheduling where every job got some number of lottery tickets and then what would happen is whenever we wanted to schedule the next task we'd draw a lottery ticket and the winning job whose lottery ticket we drew was the one that got to run okay so for instance in this scenario with yellow red and blue jobs the red ones get 50 percent of the cpu 30 percent for the blue ones and 20 percent for the yellow ones and this is a way of providing a fair queuing style of cpu distribution now we talked about lottery scheduling last time so i'm not going to go through this in great detail but there is a certain unfairness that comes from randomness in this okay and the problem is that we're picking these tickets and it takes a long time before two tasks that have an equal number of tickets really get an equal share of the cpu so as cool as the lottery ticket idea is it's still got this unfairness point okay and so we could do something similar but different which is achieve proportional share scheduling without resorting to randomness and overcoming this law of small numbers problem we have here which is by using something called stride scheduling so the stride of each job is if we take a big number w of some sort divided by the number of tickets that's going to be our stride so for instance here if w is 10000 a has 100 tickets b
has 50 c has 250 then the strides are 100 200 and 40 and every job kind of has a pass value and the scheduler picks the job with the lowest pass runs it and then adds its stride to its pass and so what you see is because we're picking the job with the lowest pass number then things that have small strides get to run more because they're advancing less with each run and those are the things that have a lot of tickets so this is called stride scheduling because you're adjusting the stride of how far you walk and low stride jobs which have lots of tickets get to run more often and they get a bigger proportion of the cpu now it gets a little messy when you worry about wrap around and all that sort of stuff so the linux completely fair scheduler is an example of this kind of fair queuing that is in common use today okay so n threads simultaneously executing on one nth of the cpu is a simple first example so what we imagine is if we had one cpu that we could somehow divide up into n pieces and evenly give it to each one of the threads if we could do that then each thread would get exactly one nth of the cpu okay now you can't do this in real hardware so the os has to somehow give out cpu in time slices and so what happens is we're going to track the cpu time given to a thread so far and with this we're going to repair the illusion that we have a perfectly split up cpu like this and so in this instance t1 got to run for a little longer than its time and t3 ran for exactly its time and t2 is now short so what we're going to do is we're going to run t2 for a little while until we catch up and if we keep making a scheduling decision that lets the one that hasn't gone enough go then we are going to get the illusion of a completely fair chunk of the cpu okay and this is very related to the stride scheduling i just mentioned okay now in addition to fairness we want low response time so there's this
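to make the stride numbers concrete here's a small sketch using the lecture's values w of 10000 and tickets of 100 50 and 250 which give strides of 100 200 and 40 the alphabetical tie-break is my own addition just to keep the simulation deterministic

```python
# stride scheduling sketch: each decision picks the job with the
# lowest pass value, runs it for one quantum, and advances its pass
# by its stride; CPU time ends up proportional to tickets with no
# randomness involved.

W = 10_000
tickets = {"A": 100, "B": 50, "C": 250}
stride = {job: W // t for job, t in tickets.items()}   # A:100 B:200 C:40
pass_val = {job: 0 for job in tickets}

counts = {job: 0 for job in tickets}
for _ in range(400):
    # break ties by name so the run is reproducible
    job = min(pass_val, key=lambda j: (pass_val[j], j))
    counts[job] += 1
    pass_val[job] += stride[job]

print(stride)
print(counts)  # roughly proportional to 100 : 50 : 250
```

over 400 quanta the counts come out almost exactly 100 50 and 250 which is the deterministic proportional share that lottery scheduling only approaches in expectation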
idea of the target latency which is a period of time over which every process gets to run so if the target latency is 20 milliseconds and you've got four processes then every process gets a five millisecond time slice the problem with that of course is if you have 20 milliseconds with 200 processes we've got a very small time slice so in fact what we're going to do is have a throughput goal which is a minimum time slice so for instance if our target latency is 20 milliseconds our minimum granularity is a millisecond and we have 200 processes then we lose our fair queuing and we go back to a one millisecond time slice by the way the cfs is my last topic for tonight i just wanted to give you this the other thing that you've probably all learned about is nice commands so the operating systems in the 60s and 70s gave you the ability to take a task that was running and give it a nice value where a nice value of zero means you got to run like everybody else if your nice values were higher than that then you ran a little slower or nicer and if they were lower than that then you got to run with more cpu okay and i'll go over this again next time but if you want now to get proportional share out of cfs there's a way of basically coming up with a weight okay and we're running enough out of time that i don't want to go into this in detail now i will next time but what i want to show you before we leave here is this idea of virtual time so here we have one task that has a weight that's four times that of the other one what that means is that every thread has a virtual time of how much it's run and so thread b when it runs doesn't register as much virtual time per physical time as a and so then if we just keep picking the thread with the lowest virtual time we'll basically give b four times as much cpu as a okay and so this is a real scheduler that's actually used in linux and you know you're probably using it now so we
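the target latency and virtual time ideas just described can be condensed into a few lines this is a cartoon of cfs not the kernel's actual code and the assumption that weight acts as a plain multiplier on the virtual clock is a simplification of mine

```python
# (1) time slice = target latency / number of runnable tasks, floored
# at a minimum granularity; (2) each task accumulates *virtual* time
# scaled by 1/weight, and the scheduler always runs the task with the
# lowest virtual runtime.

def time_slice(target_latency_ms, min_granularity_ms, n_tasks):
    return max(target_latency_ms / n_tasks, min_granularity_ms)

print(time_slice(20, 1, 4))    # 4 tasks -> 5 ms each
print(time_slice(20, 1, 200))  # 200 tasks -> floored at 1 ms

# virtual-time bookkeeping: B has 4x A's weight, so B's virtual clock
# advances 4x slower and B ends up with ~4x the physical CPU time
weight = {"A": 1, "B": 4}
vruntime = {"A": 0.0, "B": 0.0}
physical = {"A": 0, "B": 0}
for _ in range(100):
    task = min(vruntime, key=lambda t: (vruntime[t], t))
    physical[task] += 1                    # run one 1-ms quantum
    vruntime[task] += 1.0 / weight[task]   # scale by 1/weight
print(physical)
```

over 100 quanta a gets 20 and b gets 80 which is exactly the four to one split the weights asked for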
need to finish up now so the way you choose the right scheduler if you care about throughput you might do first come first served if you care about average response time you might have some srtf approximation if you care about i o throughput you might have some other srtf approximation for fairness you might use the linux cfs i just told you about for fairness in wait time for the cpu you might do round robin if you're worried about meeting deadlines you might do edf if you're worried about favoring important tasks like on the martian rover you might use priority okay so how do the linux real-time kernels affect the scheduler so what happens there is the real-time kernels basically give you the ability to schedule something in real time they might give you edf or they might give you others where the deadline is an option okay and the real-time priorities i showed you earlier are a strict priority scheduler which you can use to do real-time scheduling as well all right so when do the details of the policy matter when there aren't enough resources when should you buy a faster computer when your response time is getting too high okay so you might think you should buy a faster x when x is utilized at 100 percent perhaps we'll talk more about that next time so i'm going to end now since we're way over time but i hope you guys have a great rest of your night and we will see you on wednesday we'll pick this up where we left off and i'll say a little bit more about the cfs scheduler since we were rushed a little bit on that but i hope you have a great evening and we'll get the graded exams back to you as soon as we can good night
CS_162_Operating_Systems_and_Systems_Programming_Berkeley | CS162_Lecture_18_General_IO_Cont_Storage_Devices_Performance.txt

hey everybody welcome back to 162 we are going to continue with our discussion of i o and to that end one of the things we were talking a lot about was this idea of how a cpu which is of course running the operating system and programs talks to a device and we said well there are various buses in the system and we talked particularly about the notion of a device controller and the device controller is a piece of hardware that connects directly to the device and also interfaces with the various buses these controllers can be on pci buses pci express buses they can be usb etc and notice that in addition to receiving commands over the bus it also has the ability to basically talk to the cpu and give events through interrupts and that's where the interrupt controller connection is and so basically all of the communication with the device is pretty much between the cpu and the device controller and specifically as we were getting toward the end of the lecture last time we were talking about a couple of different ways the cpu can communicate with a device and one was via special instructions those special instructions being things like inb and outb or inw and outw for byte or word and those special port io instructions go to a special port address space which is different from the regular address space and mostly with the intel processors this is a backward compatibility thing from the original ibm pcs but what i show here as an example are some ports where ports 20 21 22 23 hex all represent registers inside the device controller that can control the device the other thing we talked about was memory mapped io in which certain addresses within the device are actually mapped into physical address space and as a result by doing reads and writes the cpu is able to effect changes on the device and this was an
example that i gave here where maybe we're controlling a screen and this would be local physical addresses and those physical addresses like 0x8002000 0x8001000 etc represent points at which if the processor were to write to those addresses obviously they have to be mapped in a page table then the writes can go through and will actually cause things to happen okay and for instance writing to display memory here might actually cause dots to appear on the screen or writing to the graphics descriptor queue might allow us to assemble various items in that descriptor queue that are effectively triangles for some interesting three-dimensional game or whatever and then if we write to a command register we can say okay render that in three dimensions or maybe we can read from 0x0007f000 to get the status of the device right and because these are in the physical address space we can protect this with address translation and under most circumstances perhaps you only give the kernel access to these addresses but you could potentially give it to a process whose job it was to control that device just by an address mapping all right and one of the things we did talk about a bit last time as i recall was the fact that with modern buses such as pci and usb and so on there's an automatic negotiation that happens for the actual absolute values of these addresses just to make sure that the physical addresses of the devices don't overlap all right are there any questions on that before i move forward so we call that memory mapped io okay good so transferring data to and from the controller as i mentioned there are a couple of options we can either use ports or memory mapping but there's another axis that we can consider one is programmed i o and programmed i o is where every byte gets transferred by the processor so the processor goes in a tight loop and it reads a byte and then reads the next one reads the next one
reads them one byte at a time or one word at a time and stores it in memory and as you can imagine that's expensive because the processor or core is doing that so the pro of this is it's very simple very easy to program and there are some low bandwidth devices that actually interact that way we showed you the speaker last time programming it to get some interesting tones out of it but if you really want to do a lot of transferring of data the other option is something called direct memory access which is a situation where you tell the controller go ahead and transfer data to or from dram and tell me when you're done and then the controller can go ahead and do all of those transfers on its own so in particular here's an example where the cpu is going to try to do some i o from one of these disks so the first step is the cpu is going to talk to the device driver in the kernel and say transfer this for me into some buffer in memory and what will happen is the device driver will go ahead and program under some circumstances a dma controller and that dma controller will then reach out to the controller for the disk and that disk will individually transfer bytes back to the dma controller and then the dma controller will write these through to memory and then when all is said and done the dma controller will finally cause an interrupt of some sort and that's the point at which the cpu comes over so you could kind of think of a dma controller as a part of the system that basically acts like a cpu for the task of actually transferring data now these dma controllers can either be a separate item on the bus that acts like a processor and does the transfer or it could actually be integrated and a lot of controllers have this these days where the dma controller aspect is integrated for instance things that are on the pci or pci express bus are actually able to go ahead and
transfer over the bus directly to memory okay and one other thing i'll point out is of course if you're writing directly to memory it's quite possible that you're going to be writing memory that is cached in the cpu and so that's an issue where we have to be very careful because we don't want the cpu's version of the cache to get outdated relative to memory and there are at least two options there i didn't put these on the slide i probably should have one is where the device driver basically flushes the block entirely out of the cache before it does the dma the second is there is dma hardware in a lot of devices that can simultaneously write to dram while invalidating the cache and that's another way to make sure this stays coherent okay questions so direct memory access is an important way to get really high bandwidth communication between devices and memory and it leaves the processor out of the picture okay so how do we find out that we're done or that for instance the device needs some service of some sort so you know examples where the operating system needs to know are when the device has completed a dma operation or if we ask the disk to do a write we need to know when that's done the other is when maybe there was an error encountered okay and so the simple thing to do is for the device to actually generate an interrupt and we talked a lot about interrupt controllers in the first several lectures of the class mostly we were talking about timer interrupts but the device interrupt is similar it goes through the interrupt controller and causes a dispatch to an interrupt handler that would handle that particular interrupt and if it's a disk for instance maybe the disk generates an interrupt when it's done transferring and at that point the operating system wakes up in an interrupt handler and perhaps it finds a process that's busy waiting for the interrupt to happen or for the device transfer to happen and it wakes that
process up and puts it back on the ready queue so the pros of interrupts are it can handle unpredictable events really well because you don't have any overhead until it's time for the interrupt so that's great the downside is that an interrupt of course is a transfer into the kernel and you change the stack you have to save a bunch of stuff on the stack there's a bunch of other overhead there and so interrupts can be relatively high overhead and if you have something that's generating lots of interrupts on a regular basis that could be expensive and perhaps an interrupt isn't the right thing at that point the alternative is something called polling and the idea behind polling is periodically the operating system just looks at a register in the device maybe by using i o instructions like we talked about or by reading from memory mapped i o from some register in the device controller and periodically checks this and when there's a bit set in some register it knows that the transfer is done and it can continue so interrupts versus polling these are duals of each other they are different ways of getting information out of a device you can imagine the downside of polling of course is that if the device isn't ready you've just wasted time looking at the device and so the pro of this is it's really low overhead because you don't actually have to save and restore a bunch of registers you're just checking a register out on the device the con is you can waste cycles if a device is infrequently ready and so actual devices actually combine both polling and interrupts a great example of this is a really high bandwidth network let's say 10 gigabit or 40 gigabit or even 100 gigabit per second networks these days if you had an interrupt on every packet you'd be in trouble however as soon as an interrupt occurs you enter the network driver portion and what it does
is it pulls all of the packets out including the one that originally caused the interrupt but all the remaining ones as well so that's a form of polling and then it re-enables interrupts when it's done so it takes the first interrupt then polls to pull all the packets out and then it continues and this is how you can basically allow something as high bandwidth as say a 100 gigabit per second network to actually not overload a processor okay great so now let's take a look a little bit more we've seen this picture earlier in the term but if you look at a typical kernel like linux or pintos you'll see that there's the system call interface which is the dividing line between user code above and the kernel below so the kernel is all in blue and there's a bunch of different facilities inside the kernel we talked a lot about process management and memory management in previous parts of the term and we're actually going to be talking further about file systems that's our next topic starting next week and then we have other devices networking we talked a bit about but we'll also talk more about that in the coming weeks there's a question here does the device or the os decide whether to do polling or interrupts that's a good question it turns out that the device typically provides both as an option and whether you're doing polling or interrupts is really a question of whether interrupts are enabled or not and so the operating system can make that decision it can decide to always disable interrupts and only poll or leave the interrupts enabled until the first one occurs and of course the first thing that happens on an interrupt is it disables everything and the kernel could choose to keep it disabled for a while while it's polling et cetera so that's purely the act of the os to make a decision about whether to do interrupts or polling so if you look at this picture that we've got
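before moving on the interrupt versus polling tradeoff can be put in rough numbers the costs below are invented purely for illustration not measurements of any real nic

```python
# back-of-the-envelope model: assume (hypothetically) an interrupt
# costs 2 us of save/restore overhead and a single poll costs 0.1 us
# whether or not data is ready.

INT_COST_US = 2.0
POLL_COST_US = 0.1

def interrupt_overhead(events):
    # one full interrupt per event
    return events * INT_COST_US

def polling_overhead(polls):
    # every poll costs the same; polls that find nothing are pure waste
    return polls * POLL_COST_US

def hybrid_overhead(bursts, pkts_per_burst):
    # the network-driver style: one interrupt per burst, then poll off
    # the rest of the packets with interrupts disabled
    return bursts * INT_COST_US + bursts * pkts_per_burst * POLL_COST_US

# 1,000,000 packets arriving as 10,000 bursts of 100 packets each:
print(interrupt_overhead(1_000_000))   # interrupt on every packet
print(hybrid_overhead(10_000, 100))    # hybrid: far cheaper per packet
```

with these made-up numbers pure interrupts cost about 2 seconds of overhead while the hybrid scheme costs about 0.12 seconds which is the flavor of why high bandwidth drivers take one interrupt and then poll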
here yes you can selectively disable interrupts as well that's a good question if you take a look at the interrupt controller there's typically a mask that lets you decide which interrupts are enabled and which are disabled if you look at this figure you see that the top half of this figure has a standard interface which is that open close read write interface where sort of everything looks like a file in linux but then there's a bunch of interesting things below the covers and we need to talk more about that as we go on and how that allows us to basically get the standardized interface above okay so our next topic is going to be some i o devices and specifically we're going to talk about ones that can serve as storage devices okay now if you remember the idea behind a device driver which is going to be something in the lower portion of the kernel here is that the device driver basically has the device specific code in the kernel that interacts directly with the device hardware through the device controller which we've talked about now and it supports that standard internal interface up to the higher levels of the kernel and that's important because it makes the higher levels of the kernel much simpler and if you remember we had this discussion early in the term the device driver is typically divided into a top and a bottom half and the top half is accessed in the call path from system calls down to making a decision about whether the device itself needs to be acted upon and so the top half implements things like open close read write the ioctl system call something called strategy which is a routine that starts communication with the device itself and really what makes it into the top half is typically a process that's trying to do some sort of i o will work its way into the top half to do the i o and potentially things get to sleep there if the device has to be invoked the bottom half
runs as an interrupt routine and it gets its interrupts from the device and makes a decision what to do next okay and i showed you this figure this should look familiar now so above the system call interface which is the user program portion we might make a decision to do a request how would we do that we might do a read or a write system call okay and that goes across the boundary and at that point we might say well can we already satisfy the request so what might be a situation where we could already satisfy the request without ever talking to the device does anybody have any ideas there okay cache good so it's a specific type of cache it's caching the device contents okay and that's what's called the block cache we haven't actually talked about that one yet but we'll get to it and so yes now if you remember this interface for reads and writes is a byte-oriented interface right so you can read five bytes from a file but the blocks as we're going to talk about in a moment underneath from say the disk are all 4k bytes at a time and so we need a place to put the blocks that we've only partially read and that'll be the block cache and so it could be that we can already handle stuff from the cache otherwise we're going to send the request to the device driver and the device driver is going to figure out what needs to be invoked and it's potentially going to put the process to sleep on a wait queue associated with say the disk and then it's going to invoke the scheduler to wake up something that's already on the ready queue and of course at that point it will have something else running while we're doing the i o okay and so that's the top half of the device driver and it's gonna send commands and invoke the strategy routine to send stuff to the device hardware at which point the hardware is just going to do its thing okay so the controller of a disk drive for instance will start the heads moving and at some point the operation
will complete and it will then generate a completion interrupt the bottom half will receive the interrupt it'll figure out who needed that data and it will wake it up transfer the data into the user's buffers and complete the request and at that point we've gone the full gamut from the original request to the response okay so hopefully that's familiar do we have any questions okay and by the way that decision between polling versus interrupts can happen partially in this top half of the device driver so this top half of the device driver could decide to disable interrupts and start polling the device and asking it for data in which case we wouldn't go to the bottom half we would be kind of working between the top half and the device itself the other thing is if the device is giving an unsolicited interrupt because say it's a network card and there's a network packet coming in then we would come into the bottom half and at that point there might be a decision made to start polling okay and if you notice in the network case what's interesting there is you don't have a process that's requested anything instead you have an unsolicited packet coming in and so the bottom half of the network device has to do a demultiplexing where it figures out which socket a packet is headed for okay so that's a topic for another lecture so is the device driver part of the device or the operating system so the device driver is definitely part of the operating system devices however have specific requirements and so a device driver comes with a device but it's unique to the operating system so the device driver for windows is going to look a little bit different than the one for linux or apple ios and mostly that's because of its interface with the upper levels of the kernel much of the lower level logic is going to be the same but it's definitely part of the operating system okay and the bottom half is not the
same as the device controller okay so this is all software i'm showing you on this screen i know this is a software class but in this instance you need to keep track that the hardware itself is actually the device controller plus say the disk and the top and bottom half are actually the software in the operating system that interacts with the hardware all right great so the goals of the i o subsystem are to provide a uniform interface despite wide ranging different devices so as we already talked about the fact that we can fopen /dev/something you guys should all look at /dev sometime the things that are in the /dev directory actually are devices and you can go ahead and do this for loop reading something directly out of say the keyboard by going to the right /dev file and that interface would work the same if you were talking to a keyboard if you were talking to a network or if you were talking to other things and so that's the standardized interface that we're looking for okay and it's the fact that the device driver provides standardized interfaces facing up that really allows us to do that and we're going to try to get a flavor for what's involved in actually controlling devices as we go through but we can only scratch the surface here so first of all there are several different types of devices and they're loosely divided up into three categories here so the first category is block devices like disk drives tape drives dvds and these are devices which present blocks of data to the operating system okay and that's because the underlying device itself is block based so if you look inside a disk drive what you'll see is a bunch of platters we'll talk about that in a moment and each platter has a set of sectors which are combined together into blocks and you can't read a byte off of the disk you have to read a chunk off the disk okay and so that's a block device and character
devices on the other hand are fundamentally byte oriented and so you can get a byte out of a keyboard or a mouse or serial ports et cetera some of the usb devices and so for the block devices yes you've got open read write seek but when you're fundamentally pulling from the raw device interfaces you're going to get a whole block at a time for character devices there are things like get and put which let you get single characters okay now raw interfaces are not the ones we're used to we're really used to for instance on the block devices you're typically going through a file system the file system goes the additional mile of making sure that even though the devices have blocks you could read three bytes from a file and that's going to be something above the block device interface and in fact if i go back here for a second let's just do that to my little green figure if you notice in this figure we've got the block devices down here the file system which we're going to talk about is one of our next topics takes these blocks which are scattered all over the disk potentially and reassembles them into what i'll call bags of bytes that you can then read and write which is what we think of as files and what we think of as living in a namespace for files okay and so that's going to be the file system these other devices which are fundamentally serial those things have a pretty direct interface up because they're already byte oriented like the interface that is provided above the system call interface all right now so the last type is the network device now you might think that networks ought to be either block or character devices but it turns out they're treated as a separate type of device mostly because of the way they work okay and the way they work is they have sockets which receive things off of networks and then like we mentioned earlier unsolicited packets come in and get sorted into sockets and so on
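circling back to the /dev point from a moment ago here's a tiny runnable illustration assuming a linux or other unix machine i'm using /dev/urandom rather than the keyboard device since it needs no special permissions

```python
# "everything looks like a file": a character device under /dev can be
# opened and read with exactly the same calls as a regular file.
import os
import stat

with open("/dev/urandom", "rb") as dev:
    data = dev.read(16)          # same read() API as any ordinary file
print(len(data))                 # 16 bytes of randomness

mode = os.stat("/dev/urandom").st_mode
print(stat.S_ISCHR(mode))        # True: this is a character device
```

the S_ISCHR check is the character-device distinction from above showing up directly in the file's metadata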
and so those interfaces are a little different from both block and character devices and so these network devices like ethernet and wireless and bluetooth and you name your favorite communication protocol basically are considered network devices and they're pretty much interacted with as fifos or pipes or streams of bytes okay or if you think of them in terms of mailboxes or packets those packets are not of fixed size whereas with the block devices those blocks are always you know say 4k or something like that okay all right so how does the user deal with timing from above the system call interface well up till now you've pretty much been dealing with the blocking interface which means that if i go to do a read what happens is the read system call waits until the data is back okay and basically the process is put to sleep until the data is ready and in the case of a write this doesn't happen as often but if there's not enough buffer space or whatever it'll put the process to sleep until it can officially do the write okay so that is what you're used to the blocking interface and that's what i also talked about when we just talked about that diagram with the device driver there are two other options here which are actually available often by calling ioctls with the right parameters on a file you've already opened so one is a non-blocking interface and that's the don't wait interface and what happens there is if you do a read or write and you say i would like five bytes it will look and it'll immediately return regardless of how many bytes are available and it potentially will give you back zero if there's nothing available or maybe if you asked for five it might only give you three so this interface is intended to be used in a polling fashion where what you're gonna do is you're gonna keep asking until you get what you want but you don't wanna block you wanna be doing something else and
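the don't wait interface can be demonstrated with a pipe on a unix system python exposes the non-blocking flag through fcntl rather than an ioctl but the idea is the same one wrinkle is that python surfaces an empty non-blocking read as a BlockingIOError rather than returning zero bytes

```python
# mark the read end of a pipe non-blocking, so a read with no data
# available returns immediately instead of putting us to sleep.
import os
import fcntl

r, w = os.pipe()
flags = fcntl.fcntl(r, fcntl.F_GETFL)
fcntl.fcntl(r, fcntl.F_SETFL, flags | os.O_NONBLOCK)

try:
    os.read(r, 5)                # nothing written yet: would block
except BlockingIOError:
    print("no data yet, returned immediately")

os.write(w, b"hello")
print(os.read(r, 5))             # b'hello' once data is available
```

this is exactly the polling usage pattern described above check come back and check again while doing other work in between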
then you come back and ask again if you didn't get everything you want so that's the don't wait interface okay and oftentimes you can turn a blocking interface into a non-blocking interface with the right ioctl calls okay finally there's the asynchronous interface which is a little different than non-blocking asynchronous says tell me later and so what you do there is you give it a buffer and you say i would like 10 bytes and it will return immediately regardless of whether the data is there but then later via something like a signal it'll say hey your data is ready and at that point you can look in the buffer so notice how the top two here are very similar to what you're used to okay the bottom is very different in that you've given it a buffer and then later you go back and look in the buffer okay so these three things are the interface from the user to the kernel okay the interface between the kernel and the device is what's handled in the device driver and that's very very asynchronous because it's all event driven and the notion of blocking and non-blocking putting things to sleep is really a notion of the process level above at the user level all right did that answer that question so now let's talk about storage devices because they're our topic now and we're gonna move into file systems afterwards there are at least two main types of storage device that you're gonna run into on a daily basis magnetic disks and flash memory if this were 20 years ago i might say tape okay i have randomly scattered tape in there to see if anybody would notice but tapes are much less used than they used to be but the notion of a magnetic disk is really storage that very rarely becomes corrupted it's very large capacity it provides block level random access and i'll tell you about shingled magnetic recording in a moment that is a little different than that the performance is very slow if you try to randomly access it but it's still
SMRs also have very good storage density. So flash memory is slightly different. It's becoming increasingly high density; it's still about five times the cost of disk, but those are converging. Block-level random access is very fast, with good performance for reads and a little worse for writes in typical flash, and it's got some weirdnesses that you probably haven't thought about in terms of how to overwrite blocks. The most important thing, I would say, from the flash memory standpoint is the wear problem: if you write flash too often, you can actually wear it out, and it'll start losing bits. So let's look at hard disk drives. A hard disk drive is kind of fun to open up. If you were to open one up, you'll want to make sure you copy all your data first, because you will not only void your warranty, you will lose your data. If you look on the inside, there's a set of platters and a set of heads — I show a picture of a read/write head over on the far right, and those heads are pretty sophisticated. They move as a whole, in and out, to reach different parts of the platter, and they move together: you'll have a head on each side of every platter, and they all move together to get to different tracks, which I'll show you in a moment. What's kind of fun is that the IBM personal computer, way back when, had about a 30-megabyte hard disk for 500 bucks; we'll see some modern equivalents, like an 18-terabyte drive, which holds much, much more data. I always like to show this because it's fun: when I was first starting as a faculty member, these drives had just come out, in a form factor made for camera flash cards — a larger form factor than you get today. Inside this little package is actually a single spinning platter with double-sided heads, and you could plug it into a camera and the camera wouldn't know the difference between this and a regular flash card — but it's actually a disk drive. At the time you could get four gigabytes out of this, and you couldn't get anything close to that out of flash, so this was a huge increase in density for that form factor. Pretty cool. They stopped being made around 2004 or 2006, because it got to the point where flash was far more dense, and so this kind of lost its market. So let's look a little more at disks. There's a series of platters, all on a spindle; the spindle rotates as a whole, and it rotates at a constant speed except when starting and stopping. The reason is that there's a lot of angular momentum here, and it takes a lot of work to spin it up or down, so you can't change the speed while you're using it; you usually spin it up once and leave it, because spinning up and down takes a lot of energy. Then you have the heads, and each head sits over a particular ring: if you leave the head alone and spin the disk, the full ring it traces out is called a track. All the tracks that line up vertically through all the platters together form a cylinder; any individual surface has tracks, and the little chunks called sectors are the minimum transfer unit for a disk. Up until fairly recently those sectors were almost all 512 bytes, and the operating system would combine a bunch of them into something we call a block, which would be 4K today; a lot of the really high-density disks now have a sector size that's closer to 4K. Disk tracks can be a micron wide, which is close to the wavelength of light; the resolution of the human eye is 50 microns, so you can't even see the individual tracks.
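The track/cylinder/sector vocabulary gives every sector a geometric address. As a toy illustration — hypothetical geometry, and simplified, since real drives hide all this behind linear block addresses — a linear block number can be decoded like this:

```python
# Toy disk geometry (made-up numbers, not any real drive's):
SECTORS_PER_TRACK = 63
HEADS = 16                 # one head per surface, two per platter

def lba_to_chs(lba):
    """Decode a linear block address into (cylinder, head, sector)."""
    sector = lba % SECTORS_PER_TRACK
    head = (lba // SECTORS_PER_TRACK) % HEADS
    cylinder = lba // (SECTORS_PER_TRACK * HEADS)
    return cylinder, head, sector

# Consecutive blocks fill a track, then the next surface in the same
# cylinder, and only then move the arm to the next cylinder -- which
# is why sequential access avoids seeks.
```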
You can get 100,000 or more tracks on a typical disk, which is pretty impressive, and typically the tracks are separated by unused guard regions that make sure that while you're writing one track you're not messing up the data on an adjacent track. The track length, interestingly enough, varies — well, that's just because we're talking about a circle: on the outside, a track is longer than on the inside, so hopefully that's not too surprising to anybody. What is surprising is the following. Suppose we used time to define our sectors — you write for a fixed amount of time, and that's your sector. Can anybody tell me the difference in size between a sector on the inner tracks and one on the outer tracks? Yes: the outer sectors would be physically larger. And if I have 512 bytes in an inner sector and I look at an outer sector, would the bits on the outer sector be as close together as on the inner one? No — and it's not that there's more space between bits, but rather that the bits themselves are longer. That was the way the original disks worked, but it wastes a lot of density on the outside, because what defines the amount of storage you can put on a disk is how densely you can pack the bits together in the magnetic media and still get them back when you're done — because obviously we don't want our disks to be write-only; that would be kind of unfortunate. So using modern digital signal processing, we actually write the bits faster on the outside than on the inside, to keep the density constant. The density of bits per square inch is basically the same across the whole disk surface, and to achieve that we write faster, so there are actually more sectors on the outside than on the inside, and the bit rate is higher on the outside. So if we were really interested in the highest performance from a given disk drive, we could write on the outside tracks instead of the inside tracks. Now, today disks are so big — you can put so much on a disk — that the time it takes to pull all the data off is so long that you can't justify backing data up that way; it just takes too long. So a few years ago companies like Google started doing the following: they would keep archival data on part of the disk and active data on a different part, just so they could back up the active data, and they wouldn't even use the whole disk for active data — just because it takes so long to pull all the data off; they're so big now. An interesting variant: the way I've been describing this, every track is separate, so it's a set of concentric rings. Shingled magnetic recording is a little different. There, every track actually writes over half of the previous track, and the reason to do this is that you get the tracks closer together. Now you might say, "but wait a minute, now I'm intermingling track n and track n+1," and the reason this can work is basically that a really good DSP can figure out what the bits are. The downside is that whereas with conventional recording I can rewrite individual sectors anywhere I want — rewrite this sector, then go over and rewrite that sector, and another somewhere else, without disturbing anything else on the disk — with SMR I get a lot of density, but I have to rewrite whole regions: if I want to change anything in, say, the top track, I have to rewrite it and then rewrite the tracks below it. Nicholas, you're asking about the larger rectangle at the bottom versus the conventional one at the top? So this figure is showing you the difference between a regular system, where the tracks are defined by those gray guard bands, and the shingled system, where the tracks overwrite each other; the overlapping tracks are what we're talking about here. Are you talking about this very bottom one, at the left of the diagram? I'm not sure which one. The larger rectangle at the bottom is just showing that this pattern continues — it's not saying we don't overlap that one. At some point we have groups of these shingled writes, with a bottom track and then a bunch of space, and so on, because that defines the maximum amount we have to rewrite to change something in the middle. And this part is showing you that when you write, you need a large rectangle, because the write head spans a larger amount of space, while the read head can look at a very narrow space — so that's showing how much of the disk gets modified; the writer is the wide thing there. The other thing I wanted to say that's pretty interesting is that these disks are all hermetically sealed, which means you can't open them up. Part of the reason is that the platter is spinning very fast and the heads are actually flying on air just above the disk: the speed of the disk causes an effect almost like a Bernoulli effect that lifts the head off just enough so that it floats very close, and we can get very dense recording. Today the bits have gotten so dense, the heads have to be so close to the platters, and the platters need to spin so fast, that manufacturers have started actually using helium instead of regular air in there: they pump the air out and put in helium, and that's what's inside those disk drives now. So if you open one up, you're going to completely break it.
So if we look at a disk now, we can define it by its cylinders — that's all the tracks stacked on top of each other, and remember the heads move as a group. Then we can talk about the seek time, which is the time to move the head to the right cylinder. Suppose we wanted to get some sector on the top side of the top platter: first we move the head to that track; then the rotational latency is the time we wait for the sector we want to rotate underneath the head; and last but not least, we transfer the bits that are under the head, and that gives us our data. So if we wanted to model the time here, we'd say: look, we've got a queue, we've got the controller, and we've got the disk. The total is the time the request sits in the queue — I don't know if we'll get entirely to queues today; I think we might — plus the time to get through the controller, so that's queuing time plus controller time, and then on the disk itself, the time to seek, the time to rotate, and the time to transfer. As you can imagine, the rotational latency is determined by the probability of where you are on the track when the head arrives. So if I were trying to model rotational latency in this equation, how would I do that? Does anybody have any thoughts? Yes, very good: we start with the rotation time, which is defined by how fast the disk is spinning — a typical speed is 7200 RPM or 3600 RPM — and that lets us figure out how long it takes to go all the way around; then on average we say it takes half that time, and that's the number we plug in for the rotational latency. Good. So here are some typical numbers. Capacity might be 14 terabytes — actually, I'll show you an 18-terabyte one in a moment that just came out, literally this month. This older one from a couple of years ago had eight platters in a three-and-a-half-inch form factor, which is pretty crazy. The density — the number of bits in a square inch — is more than one terabit per square inch, which is just nuts, and that's with helium-filled disks and vertical recording domains, where the actual bits go down into the platter rather than lying sideways. The average seek time is somewhere from about four to six milliseconds — that's how long it takes on average to move the head to where you want it. For average rotational latency, most desktop drives are in the 3600 to 7200 RPM range; the faster you go, the more energy you use, and that's one of the reasons helium is used — it provides less resistance, so you can spin faster with less power. Server disks typically get up to 15,000 RPM, so you can imagine that server disks use a lot of energy but are faster; 3600 RPM works out to about a 16-millisecond rotation time. Controller time depends on the controller hardware. The transfer rate can be somewhere between 50 and 250 megabytes per second coming off the disk, and the minimum transfer size is a sector, 512 bytes to 1 kilobyte, but usually the operating system pulls many together, so it will rarely transfer less than, say, four kilobytes at a time. The diameters range from an inch to five and a quarter inches, but really the three-and-a-half and two-and-a-half-inch form factors are the common ones these days. And the cost used to drop by a factor of two every year and a half; that's slowing down a little bit. All right, now here's some performance. We have to ignore queuing time, because that's going to take a whole discussion of its own, and controller time is easy to imagine.
But let's see if we can figure something out here. Suppose the average seek time is 5 milliseconds and we have a 7200 RPM disk. Then the rotation time is 60,000 milliseconds per minute divided by 7200 revolutions per minute, which gives us about 8 milliseconds to go all the way around. Notice how I've got my units set up — this is something you should remember from high school chemistry: I have milliseconds per minute and revolutions per minute, the minutes cancel, and I end up with milliseconds per revolution. If the transfer rate is 50 megabytes per second and the block size is 4 kilobytes, then I can put this together and find that it takes about 0.082 milliseconds to transfer a block. Now, to read a block from a random place on the disk, the cost is seek time plus rotational delay plus transfer time, and that seek time of 5 milliseconds is expensive, so we end up with about 9 milliseconds total. Notice the transfer time is hardly even in the picture; the seek time and the rotational delay — half of 8 milliseconds — are what's really costing us. Reading from random places on the disk, we get about 451 kilobytes per second. On the other hand, if we read from a random place in the same cylinder, notice that we don't have to seek, because we're already in the right cylinder: rotational delay of 4 milliseconds plus transfer time of 0.08 milliseconds, and now we're up to about a megabyte per second. Notice the difference — we roughly doubled our bandwidth coming off the disk just by staying in the same cylinder, so you can see it's extremely important to avoid seek time, and as I mentioned earlier, seek times can be up in the 8-millisecond range as well. Reading the next block on the same track — basically no seek time and no rotational delay — gets us that full 50 megabytes per second back.
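Those numbers can be reproduced with a quick back-of-the-envelope sketch:

```python
# Disk access time = seek + rotational delay + transfer
# (queuing and controller time ignored, as in the example above).
seek_ms     = 5.0
rpm         = 7200
rotation_ms = 60_000 / rpm          # ms/min ÷ rev/min ≈ 8.33 ms/rev
avg_rot_ms  = rotation_ms / 2       # on average, wait half a revolution
block_bytes = 4096
rate_bytes  = 50e6                  # 50 MB/s transfer rate
transfer_ms = block_bytes / rate_bytes * 1000   # ≈ 0.082 ms

random_ms   = seek_ms + avg_rot_ms + transfer_ms   # ≈ 9.2 ms
same_cyl_ms = avg_rot_ms + transfer_ms             # no seek: ≈ 4.2 ms

def bw_kb_per_s(ms_per_block):
    return block_bytes / 1024 / (ms_per_block / 1000)

# bw_kb_per_s(random_ms)   -> ~430 KB/s (the lecture's ~451 reflects
#                             slightly different rounding)
# bw_kb_per_s(same_cyl_ms) -> ~940 KB/s, roughly double
```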
This is going to tell us something: if we build a file system out of disks, it's going to be extremely important to do as much sequential reading as we possibly can; failing that, to stay on the same track or cylinder; and only if worse comes to worst, to seek. So we're going to want to build our file systems to do a really good job of keeping locality on the disk, otherwise our performance is going to go way down, and when we get into file systems you're going to see why that's important. Now, there's lots of intelligence in the controller. Sectors have all sorts of sophisticated error correction — there are far more bits in the sector itself, including an error-correction code, than you are actually writing, and they help recover the data bits when errors creep in. We can do something called sector sparing, which takes bad sectors and transparently remaps them somewhere else on the disk without telling you. We can do slip sparing, which remaps a whole bunch of sectors to a completely different track if there's a problem. We can skew our tracks so that the sector numbering is offset from one track to the next. All of this is done by the controller, so although we're going to talk about ways of building file systems that optimize for the physical location of the heads on the disk, there is a lot of intelligence already in a modern controller that's going to be competing with you, and that's something we'll talk about when we get to that point. Now, hard drive prices over time have done really well, up until about 2012 or so, when they started flattening out a little. Part of this was that drives were getting so large that there was a much smaller market for the really huge disks. Another problem that was rearing its head throughout the early 2000s was that the bits were getting so close together that the random energetics of heat would scramble your bits, and you would lose them if you tried to make the bits any smaller.
One of the things that made a really big advance on that was recording the domains vertically; that really helped a lot in making things denser. Now I want to show you a current hard disk drive, if you want to know what the state of the art is. The Seagate Exos X18, for instance, is a server drive with three-and-a-half-inch platters and 18 terabytes. It's got nine platters and 18 heads; it's helium-filled to reduce friction; it's got a 4-millisecond average seek time; the sectors are four kilobytes; it spins at 7200 RPM; and it's got very fast interfaces — for instance, with the SAS interface you get dual 12-gigabit-per-second links, and you can sustain 270 megabytes per second coming off the disk. There's also actually DRAM cache on the disk itself, 256 megabytes of it, to help make things faster. So, in case you were somehow under the impression that a disk is just a simple thing with a bunch of platters and a head: in fact, it's much more than that. These controllers are extremely sophisticated — they're miniature OSes in themselves — and there's even caching on the controller. And notice the price: I just looked it up on Amazon, 562 bucks, which is about 0.03 dollars, or three cents, per gigabyte. Compare the original IBM personal computer: a 30-megabyte hard disk with a seek time of 30 to 40 milliseconds — notice that's nearly a factor of 10 difference — and you could get maybe 0.7 to 1 megabyte per second off it, versus 270. The price was about 500 dollars, so that wasn't all that different, but because it was so small, we're talking about 17,000 dollars per gigabyte — a lot more cost per byte. You have it easy these days. Now let's talk about a different type of disk. Are there any other questions about spinning storage? This would be a good time to ask if there's something you were wondering about.
What's the cache for? Well, the cache, among other things, helps make access to the disk a lot faster. Remember when I said that random reads are really slow? What typically happens is that these caches are used for what are called track buffers: when you go to do a read, the drive actually reads the whole track into the cache, and then when you read random parts of that track, you get much faster access. Is this the same as a hybrid disk? No — a typical hybrid disk actually has flash memory on it as well, and the good thing about the flash is that writes become really fast and don't have to be committed to the spinning storage immediately, so you get much faster access out of it. All right, good. Now, solid state disks have been around forever. In 1995 they started coming out as a way of replacing rotating media with non-volatile memory; originally that was DRAM with a battery — if you look on a card like this, there was typically a battery on the back that kept the DRAM's contents when the power was off. But around 2009 we started getting NAND flash memory with a couple of levels per cell, which made flash dense enough to be interesting as a storage medium in and of itself. The idea behind flash in general is that trapped electrons distinguish between one and zero: when you program flash, you trap some electrons for one bit value or leave them untrapped for the other, and that's how you distinguish. What that should tell you is that before you can write, you actually have to erase everything — get rid of all the electrons — and then selectively write them back; we'll say more about that in a moment. The positive thing is that there are no moving parts, so the failure modes are, at least in theory, a lot better than in a system with motors running. That said, early on — let's say in the 2012 time frame, when people were really starting to put flash disks in laptops because they were such low power and in theory more reliable — it turned out that some companies had weird failure modes where, all of a sudden, a 100-gigabyte flash disk would look like it was only eight kilobytes and all your data would be gone. That happened to me on one of my laptops, where I was an early adopter of flash memory; fortunately, SSDs are much better now, and there have been rapid advances in capacity and cost ever since. The downside of SSDs: they're good on power and a little slower to write than to read, but they also wear out — the more you write them, the more you risk losing your data — so that's a slight downside. Now let me show you a little about how this works. Typically you have a host, which is the CPU, talking over a data bus like SATA. In the controller you have a buffer manager, which makes the device look like a disk drive so the host can ignore that it's something different if it wants to, and you have the flash memory controller, which controls all the flash. What the flash memory controller does is read or write four-kilobyte pages in, say, 25 microseconds or so. What's interesting is that even though in principle all the bits are stored individually, you still have four-kilobyte pages coming off, so it looks a lot like a disk from that standpoint — except we never have any seek or rotational latency, because we're not moving a head and we're not waiting for anything to spin. So you can imagine that random access is much faster here in general.
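With the rough figure above of ~25 microseconds per 4 KB page, and no mechanical delays to add, you can see why random and sequential bandwidth come out the same on flash — a sketch with assumed numbers:

```python
# SSD access cost per 4 KB page: just the page access time, whether
# the page is "next to" the previous one or anywhere else on the device.
page_us    = 25        # rough per-page access time from lecture
page_bytes = 4096

def bw_mb_per_s(latency_us):
    return page_bytes / 1e6 / (latency_us / 1e6)

random_bw     = bw_mb_per_s(page_us)   # no seek or rotation to add
sequential_bw = bw_mb_per_s(page_us)   # identical: locality doesn't matter
# both come out to ~164 MB/s with these numbers
```

Contrast this with the disk model, where the random case paid ~9 ms of seek and rotation per block; here there is no locality penalty at all, which is why flash-aware file systems can skip that optimization.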
Our model for latency here is queuing time plus controller time plus transfer time, and you get the highest bandwidth regardless of whether your accesses are sequential or random. That actually has some impact on how you build a file system, because you don't have to do the locality optimization you did otherwise — I'll make sure to include a couple of slides when we talk about file systems on how this changes things, because there are some new file systems related to this. Now, writing is a very complex operation, because in order to write, first of all, we have to have empty pages: we can't write over something that's already been written, because the only thing a write can do is add electrons. The erasing process is a high-energy removal of the electrons, and then you can add electrons back to do the writes. Furthermore, you can only erase in big chunks: the blocks you erase might be, for instance, 256-kilobyte blocks, while you write in four-kilobyte pages. So you can imagine that one tricky part of a file system for this is making sure we have enough erased blocks so that when we're ready to write new data, we can find enough free pages — and we have to track enough to know when we're done with all the pages in a block, so we can erase it and have it ready for next time. So the free-list management on an SSD can get tricky, because it's not just pages, it's also blocks. The other thing is the rule of thumb on flash: erasure is about 10 times as slow as writes, and writes are about 10 times as slow as reads. So erasure is really slow, writing is slower than reading, and reading is fast — you have to keep that in mind and try to avoid writing until you really need to. Writes also take power: writing flash uses a lot more energy than reading it. And no, writes do not include erasure — you have to do erasure separately. Now, architecturally, SSDs give the operating system the same interface as hard disk drives: you're reading and writing chunks of four kilobytes. Some of that erasure business is hidden in the controller, so to some extent the OS can ignore the distinction — although if an OS really wants to do the right thing, it wants to know about it; the SSD controller just helps you a little. You can only erase data 256 kilobytes at a time, and you can never overwrite a page that you've written before — it's got to be erased first. You might ask: why not just use 256K blocks, erase a whole block at a time, and rewrite the whole block? The answer is that erasure is very slow, and if you're not modifying bits you absolutely do not want to rewrite them, because you'll wear the device out. So this distinction between the size of the erasure unit and the size of the read/write unit is something you want to keep in mind. Now, there are a couple of things SSDs provide for you. One is that the flash controller has a layer of indirection — very analogous to what we just went through with virtual memory. There's something like a page table that maps the operating system's view of block numbers onto the underlying SSD's view of which flash blocks are in use, and that layer of indirection helps hide the weirdness of the flash from the operating system. The other thing is that it gives you the ability to do copy-on-write under the covers: when you go to write a page, what really happens is that a different physical page gets written and then remapped, so the old data becomes garbage to be collected and the new data is mapped in at the same logical block as before. This flash translation layer hides the underlying properties of the flash — there's no need to erase and rewrite an entire 256K block, because the translation layer handles it. And yes, as someone said in the chat, everything in CS is solved with a layer of indirection. What do you do with the old versions of the pages? They get garbage collected in the background: old blocks with no active pages get erased and put back on the free lists, and so on. Now I wanted to show you some quote-unquote current SSDs. Here is the Seagate Exos SSD — this is from a couple of years ago, and they haven't actually updated this family yet, but it's 15 terabytes, and it also has the dual 12-gigabit interface like that Exos drive I showed you earlier. Notice the sequential reads and writes are much faster — writes are fast because they're basically going to blocks that are already free — at something like 860 megabytes per second, as opposed to 270, so a factor of three faster. Amazon's price for this particular disk is about $5,495, which gives us about 36 cents per gigabyte, as opposed to the three cents per gigabyte we saw earlier. And this is my favorite hard-to-believe drive: here is a "disk drive" — and I say that in quotes — in the same form factor as all the other ones you're used to, but it's 100 terabytes. That's one hundred terabytes, it can do 500 megabytes per second, and it costs about $40,000, which is about 0.4 dollars, or 40 cents, per gigabyte.
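The page-table-like remapping and copy-on-write behavior of a flash translation layer can be sketched in miniature — a toy model, not how any particular controller actually works:

```python
PAGES_PER_BLOCK = 4   # toy number; real blocks hold e.g. 64 x 4 KB pages

class ToyFTL:
    """Logical writes never overwrite a page in place: each write goes
    to a fresh erased page and the logical->physical map is updated."""
    def __init__(self, num_blocks):
        self.free = [(b, p) for b in range(num_blocks)
                            for p in range(PAGES_PER_BLOCK)]
        self.map = {}        # logical block number -> physical (block, page)
        self.data = {}       # physical (block, page) -> contents
        self.garbage = set() # stale pages awaiting block erasure

    def write(self, logical, contents):
        phys = self.free.pop(0)                  # an already-erased page
        if logical in self.map:
            self.garbage.add(self.map[logical])  # old copy is now stale
        self.map[logical] = phys
        self.data[phys] = contents

    def read(self, logical):
        return self.data[self.map[logical]]

ftl = ToyFTL(num_blocks=2)
ftl.write(7, b"v1")
first = ftl.map[7]
ftl.write(7, b"v2")    # the "overwrite" lands on a different page
second = ftl.map[7]
# first != second, yet read(7) still returns b"v2" -- and spreading
# writes over fresh pages like this is also what enables wear leveling
```

A real controller would additionally erase blocks whose pages are all garbage and return them to the free list — the background garbage collection described above.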
gigabyte okay and what's really interesting about this is despite the fact that these guys wear out if you write them too much this company actually guarantees that you can have an unlimited number of rights to this drive for five years can anybody guess why even though flash wears out that they could tell you you can have unlimited rights for five years why would they even give that as a warranty if flash wears out yeah so the problem here is to fill out to fill up this drive uh is going to take way too long to do and so basically uh you could be writing at maximum speed for five years and you wouldn't overwrite things enough to wear them out okay and so they're they're comfortable saying you can write as al all you want for five years and you'd be fine all right and uh notice part of that is the flash translation layer every time you write the same block and i say that in quotes you're really writing different blocks and so it's doing what's called where leveling where it's making sure that as you overwrite things it's making sure that every one of those pages on in all of those hundred terabytes are all used equally uh well and so if you were to try to write uh at your absolute maximum rate for five years you'd never get anywhere close to wearing any of the bits out and so they can actually make that guarantee but um anyway that's my uh my favorite ridiculously large drive okay so um let's see so basically hard disk uh cost and uh and uh ssd costs hard disk ssds have been basically going toward um merging for a long time and they're pretty much they're pretty close these days here i'm not going to go through that any much much more i wanted to tell you this which is kind of fun so uh if you're aware of the kindle so i'm sure all of you have seen them before they're a really cool reading device i love them myself the thing that's cool about them versus pretty much any other lcd device is that you can read them in full sunlight and so if you're a fan of books you 
get yourself a real kindle you can kick your feet up in the sun and just read and there's an amusing calculation you might ask which is suppose that i take an empty kindle right after i bought it from amazon and i fill it with books is it heavier okay so that seems like a ridiculous question but let's answer that and the answer is actually yes but not much okay and so let's go through this so flash as i mentioned works by trapping electrons so the erase state is actually lower energy than when you write a one on there where you put some electrons in there and trap them so you got higher energy for one of the bits okay it doesn't really matter whether those are ones or zeros and assuming for instance the original kindles came out with four gigabytes of flash um if you imagine that a full kindle half of the bits are uh ones and half are zeros then half of them are of high energy state and you can compute for a typical flask transistor what the high energy state is it's about 10 to the minus 15 joules so a full king kindle is about uh one at a gram heavier than an empty one and you're you can use actually uh e equals m c squared here uh with the energy to come up with how much uh weight it is so it's actually heavier except except that of course 10 to the minus 18 grams or an atogram is um unmeasurable because the the best measure best scales out there can't measure something finer than 10 to the minus 9 grams so uh the other thing is there's a whole bunch of other caveats so you have to take the kindle set it to a constant temperature uh fill it with books uh cool it back to that temperature recharge it and then there'll be a 10 to the minus 18 gram so this weight diff difference ends up being overwhelmed by battery discharge and all that sort of stuff but it's amusing nonetheless and my sources by the way are this guy john kubatowicz there was a new york times uh column in 2011 which was pretty funny so the new york times called me up and said we have this question 
from somebody reading our column and they'd like to know if kindles are heavier when you put books in and so i wrote about why this was all right so this is a great party thing right so one of the things i love to do in 162 is i like to help you all out with parties now of course unfortunately our parties are all virtual these days or they should be but you know you can imagine that you're on your zoom with the other 50 people in your party and all of the parties have too much milk yes that's true and then you can say did you realize that when you fill a kindle with books it's heavier all right and you'll be the most popular person at that party okay so what about ssds to summarize so the pros versus hard disk drives so they're low latency high throughput we can completely eliminate the seek and rotational delay there's no moving parts so they're very lightweight the power is low they're silent it turns out they're extremely shock insensitive so you can drop things without jarring the bits by the way you can't quote me on dropping a laptop and being okay i'm just talking about the ssd you can read them at memory speeds essentially although the writes are a little slower the cons are that the storage is small relative to disks but as you can see with ssds if you're willing to pay exorbitant amounts of money you can get very big disks okay so in fact that small storage thing isn't really true anymore and the hybrid alternative that was asked about earlier is to combine a small ssd with a large hard disk okay and what that really does is it gives you the ability to do really fast writes to the disk without having to seek and really fast reads it serves as a cache okay and so some of the other cons though are there's an asymmetric block write performance so you have to read page erase write page to really change any data on a disk or on a block and the drive lifetime's a little bit
limited so you're limited to about 10 000 writes per page for modern nands and so the average fail rate is about six years life expectancy maybe nine to 11 years but if you write a lot and you don't have an extremely huge drive like the one i showed you earlier there really is a danger of losing some bits okay things are changing pretty rapidly though now one thing i did want to show you is another option which is kind of fun which is nanotube memory so nanotubes unfortunately perhaps my camera image is covering this up but nanotubes are made out of carbon molecules and they're tubes of carbon okay and you can put a bunch of them in a pattern and you can actually arrange it so that they're either randomly together or they're attracted one way or another and so you can actually have two different resistances that you can detect and that gives you ones and zeros and there's a way to clear by erasing which basically means put it back into you know one of the states and the interesting thing about this is this doesn't wear out okay because you're just moving the nanotubes around and so it doesn't wear out like flash it's persistent so you don't have to worry about losing the contents and it's as small as dram cells okay and so there's for instance a company called nantero which has been very close and been working with dram manufacturers to produce these cells and this could potentially replace dram because it's as fast and dense as dram holds its contents and doesn't have a wear out problem so that's a pretty exciting possibility to come up soon i think this is going to fundamentally change the way people think about memory once this becomes mass-produced and they had already figured out how to pretty well produce these and they were working with several dram manufacturers a couple of years ago so of course who knows exactly what's happening because the pandemic has sort
of screwed everybody up but this will be fun all right so let's shift well unless anybody had any questions on devices i want to shift gears to talk about some performance are there any other questions about devices so this nanotube memory is actually amenable to three-dimensional patterning as well so this will be really dense okay so the difference between pcie and sata3 is those are two different buses pcie is a pretty common interface to plug cards and stuff in whereas sata3 is something that was set up specifically for disk drives and so they're for slightly different uses dna storage has been interesting for a long time but i haven't yet seen a good proposal for how to make it as dense as regular dram yet but of course we all know that dna is very dense but that would be fun at some point do any of these use fewer heavy rare toxic metals that's a really interesting question i'm not sure of the answer to that the nice thing about nantero's nanotubes is the biggest thing here is carbon and it'd be great to extract that from the atmosphere and use it but in terms of things like cobalt and some of these other things unfortunately patterning of chips is not necessarily as environmentally friendly as one might like but i don't have any reason to suspect that this nanotube memory is worse than other ones and it might actually be better so that's a good question though so let's talk about performance for a moment so when we're talking about these disks or we're talking about schedulers or whatever there are several things we might talk about and i thought i would just put these on the table for a moment so for instance latency time to complete a task it's often measured in units of time seconds milliseconds microseconds maybe hours maybe years right response time is kind of the time to initiate an operation and get the response back so latency is a time whereas response time often is a round trip right it's from the
time the request went out to when it comes back okay and sometimes the ability to issue the next request might depend on when you got the response because not all systems can handle pipelining of requests okay a different thing is throughput okay so throughput or bandwidth is typically the rate at which we can send tasks or bytes those are two possibilities into something okay and it's often measured in units of things per unit time so like operations per second or giga operations per second or bytes per second megabytes per second so often in networking you might see megabits per second and then another thing which ties into all of these is the startup or overhead which is often the time to initiate an operation now overhead fits into latency of course but if you can pipeline and send several things at once sometimes you only pay the overhead on the first one and then the rest of them run at full rate now most i o operations are roughly linear where if you have b bytes the latency is the overhead plus b divided by the transfer capacity and so that overhead actually directly impacts your latency and i'll show you that in a moment when somebody talks about performance the first question you ought to ask is what am i measuring you know and is it relative to something so for instance performance might be operation time it might be rate it might be any number of things so you could talk about low latency as a high performing thing or you could talk about high throughput being a high performing thing okay let's say you're talking about what this is this is gflops i think that's just a typo sorry about that so for instance in a network suppose we have a one gigabit per second link everybody's got those you probably got them on your laptops the bandwidth might be 125 megabytes per second right so this is gigabits per second per link to megabytes per second okay that's just
dividing one gigabit by eight all right suppose the startup cost is a millisecond we could take a look at a graph like this so notice this is a double headed graph it's got packet size on the bottom it's got latency in blue on the left and bandwidth in red on the right and if you notice the latency because this is linear the latency is really the startup cost plus the size of my packet b over the bandwidth so here's the size of my packet and what i showed you there for latency is a nice linear graph and notice that at the zero intercept there's a minimum of a thousand microseconds or a millisecond because that's my overhead and so if i were to look at the bandwidth of this the effective bandwidth yeah this thing is a gigabit per second or 125 megabytes per second but if i were to look at the effective bandwidth taking overhead into account i get this red curve all right and i just take the packet size divided by the latency okay to send that packet and that gives me effectively bytes or bits per second or whatever i'm measuring and it has this shape to it okay and this shape starts out low right because my bandwidth starts out at zero for small packets and that's because the overhead's so high once i make the packet big enough then my bandwidth starts getting higher and in fact at some point it levels out because no matter how big my packet is i can't go faster than the raw 125 megabytes per second okay and so one place that can be interesting here is what's called the half power bandwidth which is the point at which my effective bandwidth is equal to half of my total bandwidth all right and that's for instance here if my packet is 125 kilobytes then my effective bandwidth is at half of my full bandwidth okay so just because you have a gigabit per second link doesn't mean you get a gigabit per second in fact you often don't unless you have really big packets what's also interesting here is if our
startup cost is 10 milliseconds notice how i had the overhead of one millisecond here if i change it to something more like a disk say 10 milliseconds and i do the same computation what you find here is that the half power point is not until 1.25 gigabytes in size so i have to have really really really large packets before i come anywhere close to getting half of my native bandwidth so that's a problem oh yeah sorry this is 1.25 megabytes my apologies i added three extra zeros in my brain there okay so overhead really matters and you see this huge zero packet size latency come into play and so when we want to do a good job of optimizing things when we start building file systems and networks and stuff on top of devices we're going to have to be very sensitive to the overhead so what determines the peak bandwidth for io so that was for instance at you know one gigabit per second well it's the hardware and so you can look at a bunch of buses we've talked about things like the original pci bus which was 133 megahertz at 64 bits per lane thunderbolt which is a usb c style connection is 40 gigabits per second so the bus speeds have been continually getting bigger the device transfer bandwidth is going to give me my peak bandwidth off of a disk okay and so that has something to do with the rotational speed of the disk or the read rate of the nand flash that gives me my peak bandwidth which is what i start with in a calculation like this so my peak bandwidth is just one gigabit per second and then the overhead takes over okay and so that peak bandwidth comes in many forms and whatever the bottleneck is in the path is the thing that's going to limit my peak bandwidth okay and we're going to talk a lot more about this next time so the overall performance for an i o path which is where we're going to want to get might look like this you have a user thread they make system calls and their request gets queued and then eventually goes to the
controller and the i o device i already showed you this earlier when i was talking about the disk drives the interesting thing that's the elephant in the room we haven't talked about is this queue the mere existence of the queue with random arrival times causes this curve okay and so hopefully by the time we get through our discussion on queuing theory you'll have a much better idea why this curve goes up as we get closer to 100 percent so we're first going to try to understand what 100 percent throughput means or utilization and that's really finding our peak bandwidth that's possible to get through the device and as we get close to that in our requests you'll find that it isn't that we linearly increase but instead we get this behavior where the curve actually climbs toward infinity if we're modeling this as we get close to 100 percent and we're hopefully going to try to explain that but for the time being what's important is the fact that this curve is very non-linear it's not linear like i was implying with these previous slides and so if it's non-linear you're going to want to be careful you're never going to want to be operating over here because your latency is going to be ridiculously high just to get a little bit more performance out of the system a little bit more utilization and so instead we're going to want something more like a half power point or the point at which we stop kind of doing a linear gain with utilization and start getting into the rapid growth okay all right and we're going to explain that more so just to start the discussion for next time sequential server performance is kind of what you think about when you say well i have a request this blue one it takes l to complete and i have a series of them and as long as the server being a disk or whatever can handle this at the rate it comes in i'm good to go okay so a single sequential server that takes time l to do
a task operates at a rate that's less than or equal to one over l on average in steady state so notice that i'm getting maximum behavior out of this server because i'm putting these l items together and i'm squishing them together as tightly as possible and so for instance if it takes 10 milliseconds for me to process something then the maximum rate i can get out of that server is going to be 1 over l or about 100 ops per second if l is for instance 2 years it's possible i'll only get 0.5 operations per year okay and so this latency l to do an operation in the server is going to be something we need to compute and that's possibly related to things like seek plus rotation plus transfer on a disk or transfer time off of flash and so on okay but as you can imagine this is looking really nice and linear and that curved graph i showed you earlier wasn't nice and linear another version by the way of something simple here is a pipelined idea where you've got three operations you've got to do three things each of which takes time l and i can do them in different stages so i first do the blue then the gray then the green and i can pipeline those in the following way okay this probably rings a bell from 61c but in that instance depending on how many pipeline stages i've got or k pipeline stages my effective rate is higher okay because if l is 10 milliseconds but i can do four stages at a time i get 400 ops per second rather than the 100 ops i had earlier so we're going to want to start analyzing our systems as to whether we can get any pipelining out of them as well okay and i think examples of pipelines are all over the place so for instance you know here's the user process which causes a syscall which queues in the file system which then goes into the upper device driver which queues there which goes into the lower device driver and so on or in a network we've got communication there's a whole bunch of queues throughout the network so anything
with queues is going to start invoking queuing theory so we're going to have to analyze it there and you're going to find out that unlike what i just showed you it's not linear it's going to have that unfortunate curve to it all right and we're going to hope to identify that as we go forward and unfortunately real systems have these queues and have that non-linear behavior so it's not synchronous or deterministic like it was in 61c all right i'm going to let you go but in conclusion we talked about notification mechanisms today we talked about interrupts and polling where polling is reporting the results by actually asking the status register what's going on and we talked about how we can combine interrupts and polling to maybe get lower overhead we talked about device drivers which interface to the i o devices and give you a clean read write open interface to the operating system above and they manipulate devices through things like programmed i o that's where the processor reads each thing one at a time or dma and we talked about the three types of devices that device drivers have to deal with we talked about block devices character devices and network devices we also talked about dma to permit devices to directly access memory so typically the device driver running in the operating system asks the device go ahead please transfer this data to that part of memory and tell me when you're done okay and one of the things we didn't talk about today but you can imagine is oh actually we did talk about it is while that transferring is going on it's possible that either the operating system had to have pre-invalidated the cache or the dma has to invalidate the cache as it goes we talked about disks and disk performance we talked about queuing time plus controller time plus seek time plus rotational plus transfer time we talked about rotational latency being a half of a rotation on average and the transfer time depends on the rotation speed the bit
storage density and as we talked about it depends on whether you're reading from the outside track or the inner one devices have very complex interactions and performance characteristics we've just started this discussion so the queuing plus the overhead plus the transfer time and that's our latency okay and we talked about how overhead can make a huge difference and you need large block sizes to deal with that and then we talked about how different devices like a hard disk versus an ssd basically have different performance measurements all right and systems as i've already alluded are basically going to be designed to optimize performance and reliability and that means we need to know something about the underlying devices so even though we have these interfaces to shield us from knowledge we need to know something more about the devices to really use them at their maximum performance all right and what we're going to find out next time is that bursts and high utilization introduce all sorts of queuing delays and that's going to be the source of that growth without bound in our performance curve from earlier all right i think we're good to go for today i'm going to let you go and yes that's ssd that's a typo good catch and so i'm going to wish everybody good luck on tomorrow's exam i'm sure you'll all do well and we'll see you on monday |
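as a footnote to the wrap-up above, the lecture's linear i/o model (latency = overhead + bytes / peak bandwidth, effective bandwidth = bytes / latency) can be sketched in a few lines; the numbers are the lecture's 1 gigabit/s link with 1 ms and 10 ms overheads, and the closed form for the half-power point is derived here from that model rather than read off the slides:

```python
# Linear I/O performance model: latency = overhead + size / peak_bw.
def latency(size_bytes, overhead_s, peak_bw):
    """Time in seconds to move size_bytes at peak_bw bytes/s with fixed startup cost."""
    return overhead_s + size_bytes / peak_bw

def effective_bw(size_bytes, overhead_s, peak_bw):
    """Achieved bytes/second once the startup overhead is amortized over the transfer."""
    return size_bytes / latency(size_bytes, overhead_s, peak_bw)

# Half-power point: the size where effective bandwidth is half the peak.
# Solve size / (overhead + size/peak) = peak/2  =>  size = overhead * peak.
def half_power_size(overhead_s, peak_bw):
    return overhead_s * peak_bw

PEAK = 125e6  # 1 gigabit/s link = 125 megabytes/s

print(half_power_size(0.001, PEAK))  # 1 ms overhead  -> 125000.0 bytes (125 KB)
print(half_power_size(0.010, PEAK))  # 10 ms overhead -> 1250000.0 bytes (1.25 MB)
```

solving the model gives half-power size = overhead times peak bandwidth, which is exactly why a 10x larger startup cost pushes the half-power point from 125 kilobytes to 1.25 megabytes in the two graphs from the lecture.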
CS_162_Operating_Systems_and_Systems_Programming_Berkeley | CS162_Lecture_17_Demand_Paging_Finished_General_IO_Storage_Devices.txt | welcome back everybody we are going to continue and finish up our discussion of demand paging a bit and then move on and talk about some io it's hard to believe we're already on lecture 17. but anyway welcome to cs162 if you remember last time we were talking about the notion of using the virtual memory system to build essentially a cache which we call demand paging and we came up with this effective access time which looks very much like the average memory access time and the key thing to note is this simple equation here which basically says memory access time from dram say 200 nanoseconds page fault going to the disk maybe eight milliseconds and keeping our units constant of course we built ourselves an effective access time and what we see there is really this value of p here such that if one access out of a thousand causes a page fault your effective access time goes up to 8.2 microseconds which is a factor of 40 larger than the dram so clearly one out of 1000 is not a good idea and notice i'm talking about dram here with 200 nanoseconds if we were talking about cache it would be even faster and a bigger slowdown and so we can do this slightly differently we can ask well if we want the slowdown to be less than ten percent then what do we have to have for a page fault rate and we find that it can't be any larger than one page fault in four hundred thousand so this means that we really really really have to be careful not to have a page fault if we can at all avoid it which led us to basically considering our replacement policy as being very important to try to keep as much data that we need in the cache as possible now we went through several policies last time and we talked about how lru was a pretty good policy but impossible to implement and so we came up with this clock algorithm if
you remember and the reason it's called the clock algorithm is because it looks like a clock we basically take every dram page in the system and we link them together so typically in an operating system like linux or whatever that means that every physical page or range of physical pages has a descriptor and those descriptors are linked together and we have a clock hand which says which page we're currently looking at and we're going to work our way through and on every page fault the clock algorithm says what do we do well we take a look at the hardware use bit which is usually in the page table entry and if it's a one it means that the page has been used recently and if it's a zero it means that it hasn't and so what we're going to do in general is we're going to advance the hand we're going to take a look at the use bit and if the use bit is 0 we're going to assume that it's an old page and therefore we go ahead and reuse it if it's a 1 we know that it's been used recently what do i mean by that well if we see a one and can't reuse that page we set that use bit to zero again and then we go on to the next one and we keep repeating until we find one where the use bit is zero and the key idea here then is that if we see something that's a one it means that the page has been used since the last time we came around the loop okay and so really what we said was yes this is not lru but it divides the pages into kind of two categories recent pages and older pages and we pick an old page now is it the number of pages in the clock or the number of total pages and the answer is it's the total number of pages in the system okay now the question here about is it the number of pages in the page table the reason that question isn't quite what you thought you were asking is that every process has a page table so there are many page tables in the system and each of them point at parts of this so what's in this clock is
all of the physical pages not the pages in the page table okay because there are many page tables and the hardware does not set the use bit to zero unlike what was said in the chat here what happens is the hardware only goes from zero to one when the page has been touched the operating system sets it to zero and it sets it to zero when it's decided that it's not going to recycle that page it sets it to zero and moves the clock hand on to the next okay so the operating system sets it to zero the hardware sets it to one okay are we clear everybody and the other thing we talked about last time and you should go back and take a look is how to emulate this bit so the use bit and the dirty bit which typically tells you that the page has been written both of those can be emulated in software if you're willing to take more page faults and i talked about that last time all right the other thing we talked about was the second chance algorithm which has the same goal as the clock algorithm which is to find me an old page notice how i said that an old page right we're looking at an old page not the oldest page so the second chance algorithm has the same idea and this was designed in the vax vms where for various reasons the hardware didn't have a use bit and so this was a different algorithm than clock and the idea here is two groups of pages the ones in green are mapped and ready to use the ones in yellow are there and they have their contents but they're marked as invalid in the page tables okay and so now what happens is the ones in yellow are put together in an lru list the ones in green are handled fifo and what we do is the following so these green pages are the only ones that we can actively access in hardware without doing anything if we happen to touch a green page we're good and we can go forward okay if we have a page fault it would be because the page we're looking for is not in the green area now it might be in
the yellow area and if it's in the yellow area what we're going to do is we're going to pull the page from the yellow area into the green area just by reassigning which category it's in and enabling the page table entry to allow it to be used otherwise we'll pull it off the disk okay now can we make a better approximation to lru i was asked about having multiple use bits the problem is it's not really easy for the hardware to have multiple use bits but as was also mentioned in the chat you should take a look at the nth chance clock algorithm which gets you closest to lru so let's look at this one now so basically what happens is full speed for the green ones we get a page fault on the yellow ones but we don't have to pull it off of disk and last but not least are the pages that are on disk and so if you notice what happens here is if we have a page fault we take the top green page and we put it at the end of the lru list and now we have to pull the page that we're looking for into the green list now if we're lucky enough and it's in this second chance list we can immediately pull it out of the middle of the second chance list assign it to the end of the green list and we're done and we can return and start executing and notice that the yellow again is being handled as an lru list because we put new pages on one side and we pull them out of the middle and so we know that the one on the end at the very top here is the oldest in the yellow list okay and so if the page is not in the yellow list we have to pull it off of the disk and so we pull it off of the disk and put it at the same spot in the green and at that point we're going to throw out the oldest page from the yellow okay and so this is now an approximation that gets us an old page to throw out which is this top yellow one and sort of has the same purpose as the clock algorithm and this was designed in an architecture namely the vax that didn't have a use bit in hardware all right
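the clock algorithm from the discussion above can be sketched in a few lines of python — the use bits are simulated in a plain list here, whereas in a real kernel the hardware sets the bit in the page table entry on access and the os clears it as the hand sweeps, and the frame count is made up for illustration:

```python
# Minimal clock-algorithm sketch: pick an "old" frame to evict.
# In a real OS the hardware sets use_bit[f] = 1 whenever frame f is
# touched; here the bits are just simulated in a list.
class Clock:
    def __init__(self, n_frames):
        self.use_bit = [0] * n_frames
        self.hand = 0

    def touch(self, frame):
        # Hardware would do this automatically on any access to the frame.
        self.use_bit[frame] = 1

    def evict(self):
        # Sweep the hand: recently used frames get a second pass (their
        # use bit is cleared and the hand moves on); the first frame seen
        # with use bit 0 is evicted.  This terminates because after one
        # full sweep every bit has been cleared.
        while True:
            f = self.hand
            self.hand = (self.hand + 1) % len(self.use_bit)
            if self.use_bit[f] == 0:
                return f             # an old page: reuse this frame
            self.use_bit[f] = 0      # OS clears the bit and moves on

c = Clock(4)
for f in (0, 1, 3):
    c.touch(f)
print(c.evict())  # frame 2 was never touched, so it is evicted first
```

note that this divides pages into exactly the two categories described above — frames touched since the last sweep survive one more revolution, everything else counts as old.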
okay good great so the other thing i kind of pointed out is the way we introduced the clock algorithm was that every time you had a page fault you'd run the clock algorithm to find a page well of course the problems with that are manifold not the least of which is that it means that you can't actually start paging in off of the disk until you find a page to throw out and the disk we know is going to take a really long time so we want to get started as soon as absolutely possible so instead of basically running the clock algorithm when we have a page fault what we do is we just keep a free list and the free list is some number of pages that are ready to be reused and they're like the second chance list okay so they're not mapped i should really make these yellow i guess but i like the red and green combination here but call this a second chance list and we have a daemon called the pageout daemon which works its way around trying to find enough free pages or enough old pages to put on the free list and at the same time the ones that happen to be dirty we can write out to disk so that by the time we get to the head of the free list the page is not dirty and ready to be reused okay and it's just like the vax second chance list except we have a clock for the active pages and a second chance list for the free list and why do i say this well if you happen to have a page fault that happens because of a page that's still in this free list we can immediately put it back in the clock ring and reuse it okay so a daemon is really basically a kernel thread that's always running is one way to look at that so the operating system starts up some number of threads that are only running in the kernel and they don't have a user half or it's something that's started up at startup time in the operating system and it's running with root privileges and it's running all the time that's typically called a daemon as well all right now
call it a background process if you like so now on to where we were at the very end of the lecture so we were talking about this idea of a reverse page mapping so think of the page table as forward it basically says for every virtual address find me a page and i can figure out if there is a mapping what the physical page is the problem is that occasionally if i want to evict a physical page we've been talking about when you'd want to do that you have to figure out all of the page table entries and really page tables that hold that and the reason this is tricky is because it's possible that for a given physical page there might be many processes that point at it we talked about when you fork processes you have a bunch of page tables that point to the same physical page we've talked about shared memory etc and so basically this is a reverse mapping mechanism that goes from a physical page to all of the page table entries that hold it okay so it needs to be fast we talked about that last time there are several implementation options option one is you could actually have a page table or a hash table whatever that goes from a physical page to the set of page tables or processes that hold that page and you know that's fine you can build that in software in the operating system it's a little expensive potentially linux actually does this by grouping physical pages into regions and it deals with regions at a time and since there's a smaller number of entries that makes it a little faster okay but the essential idea is to basically go from a physical page to the set of page table entries that hold that physical page okay now on to what we haven't talked about so how do we actually decide which page frames are going to be allocated amongst different processes so we have a physical amount of memory i don't know 16 gigabytes okay whatever it is and with a modern cloud server it might be terabytes these days and the
question is how do we divide that physical memory up among the different processes so you know i don't know is it for fairness or what what's the question there well we have many policies this is a scheduling decision so does every process get the same fraction of memory if i have 100 processes you know and i've got 100 gigabytes each gets a gigabyte but maybe different processes have different fractions of memory that they need to actually run if you happen to have a process that basically reuses the same page over and over again giving it you know 100 gigabytes of storage is not going to be helpful and it's wasteful somebody else might need that memory okay it may be the case that we have so many processes running that there's so much memory that's needed that we're spending all our time thrashing and maybe we ought to actually swap a whole process out to give our machine time to run okay that's a desperation scenario okay well the other thing to keep in mind is that every process needs a minimum number of pages and the way to think of that is you've clearly got a page where the current instruction pointer is you want that one in memory otherwise you won't be able to execute and you want some number of dram pages that would basically be the ones that we're currently accessing and if you don't have that you're not going to be able to make forward progress okay and for instance on the ibm 370 you might actually need six pages to handle the single ss move instruction so there was a question in the chat don't we just figure this out dynamically the answer is mostly yes except there's a minimum number of pages based on the architecture just to guarantee forward progress of one instruction to execute okay and it's not about full associativity in this case it's about making sure because remember we have hundreds of processes it's about making sure that every given process has its minimum number so that when we
go around to scheduling it we actually can execute okay so we could when we're ready to replace a page we have a couple of options so what do we mean by replacing a page it means we have a process that's trying to run it needs a page that's out of memory where do we get the memory from now we can use the clock algorithm in a global sense which is what we've kind of been talking about here right we have everything in the same clock uh algorithm and the same clock data structure and the process just gets a replacement frame from the set of all frames and um you know whatever process loses it loses it okay so that is uh often done that's a very common policy is basically all of the pages are in the same boat and they just get replaced using the clock algorithm another thing that you might imagine in which some operating systems do to be more fair or if you have a real-time operating system maybe you do this to make sure you meet your real-time goals is that each process selects uh from its own frames so you you assign physical memory to the processes and then when a process runs out of memory and needs to page in something it picks one of its own pages to put out okay so in that scenario you could have each process has its own clock algorithm to choose which page of its own is an old one and then we need some policy now to decide how to divide the pages up and maybe we dynamically choose a number of pages per process okay and that would be a local replacement policy with a um some policy for dividing the memory up probably dynamically okay so let's look at a couple options here so one option is that every process uh gets the same amount of of memory and so this is a fixed scheme so for instance you have 100 frames of physical memory five processes each gets 20 frames another might be a proportional allocation scheme where um the bigger process the one that has the most virtual memory needs gets more memory and we could allocate this uh with some proportionality 
constant right so perhaps s sub i is the size of process p sub i in terms of total virtual pages on the disk and so then what we do is say well what's s i over the sum of all the sizes times the amount of memory i've got and that fraction goes to that process can anybody think about why although this might sound good this might not be a good plan okay we have malicious programs and abuse but let's assume for a moment that this is not about maliciousness those are perfectly good answers yes i like this next point here so basically the size of the process is the size of the code all right and so in that sense if you take the binary and you link it and you look at the size of the binary on disk that would be this proportional allocation scheme why is that probably not indicative of the number of pages that this thing actually needs to execute properly anybody think of any good reasons okay so everybody on the chat is basically getting the right idea and the right idea is this you know when you think about today's programming we link in these huge libraries that have a lot of features to them but we only use some of the features and so the size of the code may have no reflection on the amount of code we're actually using at any given time so you could have a really large process which is really only using a small amount of code and this proportional allocation scheme wouldn't do the right thing okay another thing obviously that you could do is a priority allocation scheme so basically it's proportional but with priorities rather than size and so the higher priority processes get a choice of more pages to use okay and so the idea might be if a process p i generates a page fault you select a replacement frame from all the processes with lower priority so the question in the chat somebody had said oh dynamic linking is a reason that this proportional
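the proportional scheme just described can be sketched in a few lines of python (a toy illustration with made-up process sizes, not how any real kernel does it):

```python
# toy sketch of the proportional allocation scheme: process i gets
# s_i / sum(s_j) * m frames, where s_i is its virtual size in pages
def proportional_frames(sizes, m):
    total = sum(sizes)
    return [s * m // total for s in sizes]

# 100 frames split among processes of 10, 30, and 60 virtual pages
print(proportional_frames([10, 30, 60], 100))  # -> [10, 30, 60]
```

note that this rewards a process merely for being large on disk, which is exactly the weakness discussed here.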
allocation might not work and then the question is why does that have something to do with it and the answer is well when your program starts running and it dynamically links a bunch of libraries we talked about that briefly what you're doing is you're essentially attaching to libraries that are already in memory and now all of a sudden you've got a much larger process because you've linked in all of those libraries right and so that might contribute to what was considered as your total size and notice by the way that dynamic linking is not the only thing here if we just statically link a large library that'll increase our size as well okay so maybe the problem with these schemes is these are kind of fixed they're trying to do something based on static properties of the process and maybe it'd be better to do something more adaptive okay so what if some application just plain needs more memory and some other application doesn't need more memory maybe we ought to listen to that okay and how would we tell what would be a clear sign that a process needs more memory anybody have an idea page faults lots of page faults what might be a clear sign that a process doesn't need as much memory as it's got okay i see a bunch of people saying no page faults now you're never going to get no page faults but i would say low page faults right so the number of page faults is small relative to some process that really needs them which has a high page fault rate so we could see relative to each other that perhaps we could reallocate some of our memory and it might be a better idea there okay and so the question might be could we reduce capacity misses now if you remember the three c's right capacity misses are ones that happen because we don't have a big enough cache or in the case of page faults that process doesn't have access to enough memory and so in this case what we're going
to do is figure out how to dynamically assign okay and we could imagine that there's something like this okay so we have the number of physical frames we give to the process on the x-axis the number of page faults on the y and you could imagine a lower and upper bound which is where we want to be so not so low on the page fault rate that we're just using memory in a way that's not helpful and certainly not so high because we are going to be thrashing and not making progress but maybe we want to be in this narrow range here between lower and upper and so as a result if the number of page faults is above the upper bound we know we really need more memory and if it's below the lower bound it means that maybe we could give up some of our memory and we wouldn't notice too much okay and so this is a specification for a policy to assign page frames okay of course what if we just plain don't have enough memory so that we can't get anybody below the upper bound then what okay so we don't have anybody below the lower bound to take pages from to help with the upper bound what do we do yeah and then you cry somebody said right buy more buy a better system yep or maybe you swap out enough processes so you basically take a running process you put it completely on disk thereby freeing up memory so that the remaining ones can run fast enough and then pull the process back in off of disk and run it okay because when you're in this region with a high fault rate what's happening is the overhead's so high you're not making progress and you're doing a whole lot of swapping in and out okay and so the only thing your machine is doing is swapping and it's doing it really well and it's doing it really rapidly okay whereas if we take several processes and put them out on disk to sleep entirely we free up memory then we can get into this better region where we're more efficient and we're actually going to be running
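as a sketch, that lower/upper-bound page-fault-frequency policy might look like this in python (the bounds and step size are made-up illustrative numbers, faults per memory reference):

```python
def pff_adjust(frames, fault_rate, lower=0.01, upper=0.10, step=4):
    """page-fault-frequency policy sketch: grow the allocation when
    the fault rate is above the upper bound, shrink it when the rate
    is below the lower bound, leave it alone in between"""
    if fault_rate > upper:
        return frames + step          # needs more memory
    if fault_rate < lower and frames > step:
        return frames - step          # can give some memory back
    return frames                     # inside the target band

print(pff_adjust(20, 0.20))   # thrashing: grow to 24
print(pff_adjust(20, 0.001))  # barely faulting: shrink to 16
```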
much faster on the remaining processes we can complete them and then start pulling things back in okay so this is a situation where swapping can make a big deal now there was a question about how we set the lower and upper bound so what's going to happen there is really based on previous experiments on your operating system you can kind of figure out that things above the upper bound are really not making progress and things below the lower bound are uh really don't need their pages the upper bound one you can kind of figure out if you look at the overhead of swapping uh you can kind of figure out what's that break break even point at which uh you know you're doing you know 50 50 half swapping half regular perhaps that's an upper bound or somewhere in the middle here that you don't want to exceed okay so here but the word frame by the way um is the same as a as a physical page sorry if that's a confusing term there okay so frame is a physical page all right so thrashing is a situation where you just plain don't have enough pages and yes if you could somehow uh buy more memory you might help but um in fact if you take a look here on the x-axis on this this uh graph what i've got here is the number of um threads that are simultaneously running so you could i got this as degree of multi programming this could be the number of processes it could be the number of threads that are all simultaneously running and the interesting thing about this is as you increase the number of threads your the fraction of the cpu that you're using starts rising so at some point we have enough threads to keep the cpu busy can anybody tell me why adding more threads even if you have only one cpu might give you higher utilization of the cpu why does it even make sense that this goes up okay because if you think about so there was something here somebody said there you go somebody said less blocking on io correct all right so the thing is that um it's not that there's less io what's going on 
is we have even though we have threads that are blocked on io we have other threads to run and so we're good to go okay and so this is helping us overlap computation and uh and io okay now um at some point you hit the thrashing point where the number of threads you've got is just way too high and you're doing nothing but overhead and what you get is this precipitous loss of performance okay so it's not just that this level's out but that it just gets bad and everybody does poorly and that's because you're spending all of your time going on and off of disk and disk of course is extremely expensive and thereby nobody is making any progress okay so thrashing is a situation where a process is busy swapping pages in and out with little or no progress okay so the question is how do we detect it what's best response to thrashing well clearly we would detect it uh by there being just a very high rate of uh i o going on or excuse me of um paging going on in fact you could even detect that the amount of time you spend paging versus the amount of time you spend executing far more paging okay when you're in that situation you're clearly thrashing and the best response in that situation is really to basically stop some processes put them out on disk and let the other ones make forward progress and you'll do much better okay okay the reason that more threads lead to more paging is because they're going to have more unique memory requirements and therefore you're going to have a lot more paging okay all right the other thing is why does io help us here the answer is it's if you have a single thread and it's doing bursts of io followed by burst of computation then when it's doing the i o it's getting zero cpu utilization so you want to make sure you have enough threads left over so that somebody can always be computing while the rest of them are sleeping on i o okay and you might the choice on which ones to page out that's a good policy question all right maybe you pick the one 
that's got the most pages so the other ones can run all right there's several different policies you can imagine there so let's talk a little bit about the needs of an application okay so the needs of an application or a process or a thread is based on its memory access okay we looked at this a couple of lectures ago if you were to take a look at the memory address space on the y axis here and you look at time on the x what you see is every vertical slice represents the set of pages of the set of virtual addresses that are actively in use right so we could scan across for any given point in time a little window in time and we could look at all the addresses that are in use and that's actually our working set so those are the pages that have to be in memory during that given time period in order to make forward progress okay now one of the answers to what does a process need to make forward progress is it needs to have its working set of pages in memory and notice by the way if you were to look at any given time slice what you'd see is the set of pages in that given time slice is different than the set of pages a little later okay so if you look here is a region where the memory addresses in this region are in high use but they're not in high use for the rest of this execution time so only when we're in this region do we need those pages in and so our working set's changing over time and we want to make sure at any given time that the total working sets of all the processes or threads that are trying to run can fit into memory and if the total memory you need for the running threads is bigger than will fit in your physical dram then you've got thrashing okay so the working set is the minimum number of pages so if you don't have enough memory then what well better to swap out processes at that point and the policy for what to do um you know
there are many policies you could come up with the bottom line is trying to free up enough memory that things can make forward progress okay so here's a model of the working set which roughly corresponds to this blue bar i showed you in this previous slide so the blue bar says if we take a look over a period of time window from you know delta to delta plus something and i'd look at all of the addresses in that range that's the working set at that given time period okay and so here the working set at time t one is really uh going back a delta period what is the total set of pages that are in use and i could write those in set notation pages one two five six seven are in use and those are the pages that need to be in memory okay if uh you look at this other time set uh work excuse me you look at t2 then you see that there's a different set of pages three and four okay now um so the working set window is a fixed number of page references for instance you might be the last 10 000 instructions that defines a working set and those are the pages that have to be in memory in order to make forward progress and so this is actually a model and you can imagine that if delta is too small it's not really encompassing what i need to run okay and if it's too large it's not going to meet up with the different periods in the program so if delta is too big so that would correspond to this blue bar being too wide then i would mistakenly think that i need all of those pages as well as all of these other ones if the bar was too wide and so it needs to be kind of narrow enough to reflect the changing patterns of the working set over time okay and of course if delta is infinity then um you're encompassing the entire program and this isn't really a useful model other than to say well here's all the addresses that the program uses right that doesn't have enough of a time component to be helpful okay so this is a good question in the chat won't we give a lot of memory right as processes 
change their working set so the answer is really that if you look at the clock algorithm what happens is that dynamically adapts so as the working set changes what really happens is the old pages aren't the active ones and i bring in new ones if i want to be more sophisticated about what's going on here and i see a changing working set then what i'm really saying is i'm never going to have more pages than fit in that say 10 000 instruction window and if i'm really going to build a paging scheme based on that then as i go through what really happens is i sort of say oh gee those pages i had before i don't need anymore but i need these new ones and you could let those old pages be used by some other process that's getting some new ones okay so the page faults you know this is kind of averaging over time so as you move forward the page faults aren't going to get any faster than they would otherwise just by this model this is really trying to model what pages we need to have in core to make progress and if you were to add up all the working sets for all of the running processes then you get an idea of how much total memory you need how many total frames and that gives you an idea whether you're in a thrashing situation because d is greater than the total memory you've got okay so the policy sort of is if the demand is greater than m then you suspend or swap out processes until you can make forward progress and here the word swap when i say swap out a process that means put the whole thing out on disk and free up its physical pages so that other things can use those physical pages now m here is total memory okay so m is what i've got available in my dram now let's talk a little bit about compulsory misses so compulsory misses are misses that occur the first time you ever see something this might be the first time you ever touch a page or after the process is swapped out and you swap it back in all right
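to make the working-set model and the d greater than m suspension policy concrete, here is a small python sketch (the reference trace, the window size, and the largest-first victim choice are all made-up illustrations):

```python
def working_set(trace, t, delta):
    # W(t, delta): the distinct pages referenced in the window (t - delta, t]
    return set(trace[max(0, t - delta):t])

def to_suspend(wss, m):
    # if demand D = sum of working-set sizes exceeds the m frames we
    # have, swap out whole processes (largest first here) until the
    # remaining working sets fit in memory
    out, demand = [], sum(wss.values())
    for pid, size in sorted(wss.items(), key=lambda kv: -kv[1]):
        if demand <= m:
            break
        out.append(pid)
        demand -= size
    return out

trace = [1, 2, 5, 6, 7, 7, 7, 7, 5, 1, 3, 4, 4, 4, 3, 4, 3, 4, 4, 4]
print(working_set(trace, 10, 10))  # early phase -> {1, 2, 5, 6, 7}
print(working_set(trace, 20, 10))  # later phase -> {3, 4}
print(to_suspend({"a": 60, "b": 50, "c": 30}, 100))  # demand 140 > 100 -> ['a']
```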
this could be a source of compulsory misses after a phase where you've pushed the thing out so there was a question here are demand-paged frames basically page faults right now if we're doing demand paging what we're saying is we bring a page in as a result of a page fault so demand paging is the same as pulling something in dynamically as soon as it's needed the reason for looking at the working set that we've done is one to give us a better idea how many pages we really need but two it can actually lead to a slightly more intelligent paging in okay so you could say that we could do something called clustering which some operating systems do which says on a page fault what you do is you bring in multiple pages around the faulting page that's a form of prefetching and since the efficiency of disk reads increases with sequential reads which we'll show you as soon as we get to disks it makes sense maybe to read several pages at a time rather than just the one that you page faulted on so that's a way on a demand page miss to pull in slightly more pages than we're asked for as a way of trying to optimize our page faults and lower the compulsory misses okay the other option is actually to do real working set tracking which is to try to have an algorithm that figures out what the current working set is for a given process and when you swap the process out and then bring it back in maybe you just swap in the working set as a way to get started and thereby avoid the compulsory misses okay now let's look a little bit at what linux does so memory management in linux is a lot more complicated than what we've been giving of course but it is interesting to take a look at what they've settled on so among other things linux has a history that tracks some of the history of the x86 processor and so linux actually has at least three zones it has the dma zone which is memory less than the 16 megabyte mark originally these
were the only places where dma worked well on the isa bus i'll say more about dma in a few slides but this is direct memory access there's a normal zone which was everything from 16 megabytes to 896 megabytes okay and this is all mapped up at 0xc0000000 for the kernel i'll show you that in a moment and then there's high memory which was everything else okay every zone has its own free list and two lru lists which is kind of like they each have their own clock okay many different types of allocators okay you've started looking in homework four at ways of making malloc and so on well if you look inside the kernel there's several different allocators so there's things called slab allocators per-page allocators mapped and unmapped allocators there's a lot of interesting things there there's many different types of allocated memory so some of it's called anonymous which means it's not backed by a file at all some of it's backed by a file so once we get talking about file systems a little more we'll look at some of these uses of memory there's some priorities to the allocation and there's the question of whether blocking is allowed so if you remember we talked about how things like interrupts aren't allowed to go to sleep because the interrupt has to be short okay well blocking that is going to sleep may or may not be allowed in your memory allocator so if you imagine you have a kernel malloc one of the things you need to tell it is if you don't have the memory i'm asking for are you allowed to put me to sleep or not if you're in an interrupt handler the answer's got to be no because if it puts you to sleep you basically crash the machine on the other hand if you're coming in from a process maybe getting put to sleep is okay so that's the difference between blocking or not blocking and the allocators inside the linux kernel have to make that distinction okay so here's a couple of interesting things i want to show
you so this is pre-meltdown i'll say a little bit more about meltdown in a second but back at a couple of years ago we basically had a 32-bit address space looked like this so there was three gigabytes for the user and another gigabyte for the kernel and what this is is the kernel would map not only its kernel memory but also every page up to 896 megabytes were also mapped up here okay and then the user space had up to three gigabytes of virtual memory that it was allowed to use now what's interesting about this is of course what's in red isn't available to users so if users try to use this um you get a page fault and ultimately a core dump but as soon as you went from kernel or excuse me as soon as you went from a user to a kernel like by a system call these addresses are already mapped in the page table and they're ready to use okay so you know all of the kernel code is up there all of the interrupt handlers all that stuff and uh every page in the system is up there all of that's available for immediate use as soon as you go into the kernel okay when you get to 64 bit memory which is considerably bigger so notice that we only have 32 bits of a virtual address here we have 64 bits of virtual address it has a similar layout but basically 64 bits give you a lot of memory so much memory that uh nobody has that much dram yet okay and so you not only have don't have that much dram you don't really have that much virtual memory even and so what happens there is even though in principle you could map any virtual address to any physical address what happens in real processors is there's actually uh what's called the the canonical hole in the middle okay and that really reflects the fact that the page table only works up to say 48 bits of virtual address and notice the the idea here is that you'd have 47 uh ones you know from all zeros to 47 ones gives you the user addresses and then at the top of the space from all ones down to uh 47 zeros gives you the kernel addresses 
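that canonical-hole rule is easy to state in code: a 64-bit address is usable only if its upper bits are a sign extension of the top implemented bit (assuming 48 implemented virtual-address bits, as on the x86-64 parts being described):

```python
def is_canonical(addr, va_bits=48):
    # bits 63 down to va_bits-1 must all equal bit va_bits-1,
    # i.e. all zeros (user half) or all ones (kernel half)
    top = addr >> (va_bits - 1)
    return top == 0 or top == (1 << (64 - va_bits + 1)) - 1

print(is_canonical(0x00007FFFFFFFFFFF))  # top of user space: True
print(is_canonical(0xFFFF800000000000))  # bottom of kernel range: True
print(is_canonical(0x0000800000000000))  # inside the hole: False
```

any load or store to a non-canonical address faults, which is exactly the "everything in between is not assignable" behavior described here.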
and then everything in between is basically not assignable so any attempt to touch that part of the virtual space would cause a page fault okay and so this layout really reflects the fact that you don't even have all 64 bits worth of virtual addresses now somebody kind of joked in the chat that yeah we don't yet have 64 bits worth of physical memory but someday it'll probably happen there's already people talking about 128 bit processors i mean those exist so i don't know things keep getting larger okay so let's look a little bit more at what we had here okay now if you look again what's great about this arrangement is that every page is available in the kernel up in this space and all of the kernel code and everything's available up in this space and so it really makes it easy for the kernel because it can touch any page it can touch any of its code and it can basically manage those pages easily okay and one of the things is that in general those red regions are just not available to the user there's a couple of special dynamically linked shared objects that are available to the user and those are moved around randomly every physical page has a page structure in the kernel they're linked together into the clock and they're accessible in those red regions for 32-bit architectures as long as you have less than 896 megabytes then every page not only was in some user's page table but it was also available in that red region up there for the kernel to touch so it actually had double mappings okay and then for 64 bit virtual memory architectures pretty much all the physical memory is mapped above that ffff8 range okay so this 896 megabyte number comes from having enough space up in that red region to map 896 but leave some extra space for the kernel and for a few other specialized addresses okay so needless to say the kernel's only got a gigabyte up there so you can't map four
gigabytes into one gigabyte that wouldn't work and it turns out 896 megabytes is the max you can get uh above c 00 because that's just the way linux does it so meltdown happened okay so what was meltdown meltdown let's go back to this map so sometime in 2017 2018 basically uh the computer architecture community was shocked by something called meltdown and what it was was it was a way that was demonstrated for user code to read out data that happened to be mapped but invisible uh in the kernel okay so even though these page table entries were marked as kernel only the fact that they were in the page table at all even though they were marked as unreadable meant that using the meltdown code you could read data out of that and it was actually demonstrated that you could um with user code read all of the data out of the kernel which means that you know secret keys and all that sort of stuff was all vulnerable okay which as you can imagine was not a great thing for people right and so the idea here is is using speculative execution now what you got to realize is modern processors take a bunch of instructions and they execute them out of order and a way to make everything fast okay and so they run them out of order and they even allow things to run ahead and do executions that aren't allowed and the reason that's okay is because any problems are eventually discovered and all the results are squashed and it just works okay so the if you were really interested in this i highly recommend you take 152 it's a lot of fun to learn about why this out of order execution works but uh the key thing here to to first of all keep in mind is yes things are executed out of order and they're executed in parallel and what have you but and they're allowed to temporarily do things incorrectly but when all is said and done it's all cleaned up at the end so the registers never reflect incorrect execution or violating of priorities or kernel uh privileges or anything and so nobody in the you 
know computer architecture community really thought that this was going to be possible okay and what they didn't realize was that you could do something like this where you set up the cache okay you have an array at user mode that's why it's green it's got 256 entries times 4k a piece which is a page size and you flush all the array out of the cache so all of these cache entries in the array are now gone and then what you do is this following code and i just want to give you a rough idea you say i'm going to try something this is not quite c but it's close i'm going to try to read a kernel address that i'm not supposed to okay so it's up in that red region and i'm going to try it okay and then i'm going to take the result that i read out of it and i'm going to use that to try to read out of this array which i have access to okay so i'm only going to get one byte out of the kernel i'm going to use it to access something in the array and then if i get an error which of course i'm going to get because i'm reading kernel memory it gets caught and ignored okay and why does this do something well this does something because the processor is running all of this stuff ahead in its pipeline it goes ahead it does the read early it accesses the cache early and then it says oh you weren't supposed to do that and it squashes all the results so the registers don't have anything in them but i have touched the cache and now the cache has got an entry in it depending on what the value was i read back so one of 256 cache lines is now in the cache and so then all i have to do is scan through and find the one that's actually cached and fast as opposed to all the other ones that go to memory and voila i just read eight bits out of the kernel okay and this was shocking okay what this did was it took the out of order execution which is there for performance and it suddenly gave you the ability to read stuff out of the kernel that you
weren't supposed to touch okay questions it takes a little getting used to but it's astonishing that this is possible okay and let me just say this again the idea here is i try to read a byte out of the kernel which i'm not supposed to the processor is heavily pipelined so it goes ahead and reads it anyway i use that result to try to do a read from one of 256 cache lines and all of this stuff gets squashed because the processor says oops that's not something you're allowed to do but the damage has already been done because i've already tried to read into the cache and as a result one out of 256 entries in the cache has a value in it and i can figure out which one through speed by just saying oh that one cache entry is fast the others are slow okay and as a result you can work your way through and read out all of memory so this is bad okay and in particular it's bad because all of the kernel address maps that everybody had all of these years with kernel mapped stuff in the upper portion i just showed you all of that red up there right okay this type of layout had been around forever extremely convenient because basically the page table has got everything in it but it's only until you go into the kernel that these kernel addresses are allowed to be used suddenly you couldn't do that anymore because it opened you up to the meltdown bug and so post meltdown there's a whole bunch of patches that came in that basically involved no longer having one page table but really having two for every process one that's used in the kernel and one that's used for the process and that meant that you had to flush the tlb on every system call okay in order to avoid the bug except on processors that actually had a tag in the tlb that would tag based on which page table you're using and only versions of linux after 4.14 were able to use that pcid so this really slowed everything down okay and the fix would be better hardware
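the access pattern being described looks roughly like this in pseudocode (names are illustrative, and a real exploit needs careful timing and error-suppression machinery around it):

```
uint8_t probe[256 * 4096];           // user array, one page per byte value
flush_from_cache(probe);             // evict all 256 candidate lines

try {
    value = *kernel_address;         // not permitted: will fault...
    touch(probe[value * 4096]);      // ...but runs speculatively first
} catch (fault) { /* squashed and ignored */ }

for (i = 0; i < 256; i++)            // the one fast (cached) page
    if (is_cached(&probe[i * 4096])) // reveals the leaked kernel byte
        leaked = i;
```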
that kind of gets rid of these timing side channels and there have been fixes kind of on the way for a while and they're starting to get better so the reason the processor does what we're talking about here is really to speed everything up because you want as much pipelining as possible and the checking of the conditions takes a lot of time just like the access so it starts the accesses early okay and it is mostly fixed okay it's mostly fixed but it's still a little bit surprising that this was possible at all okay okay yes you are understanding this correctly okay all right so let's switch gears a little bit but anyway the reason i wanted to bring this up is a it's an interesting bit of very recent history and b it actually changes what memory maps are allowed now and if you're wondering why things are not as clean as they used to be it's partially due to the meltdown memory map okay so now we're going to switch gears we're going to talk about io and if you remember we've talked a lot about the computer and data paths and processors and memory we really haven't talked about this input output issue yeah pintos is potentially vulnerable to this problem but pintos is not a commercial operating system so why is io even interesting and the answer is really without io a processor is just like a disembodied brain that's busy computing stuff and of course we all know that all processors aspire to computing the last digit of pi but presumably it'd be nice if we were able to get the answer out okay and so what about io now there was a question is io in scope so general io basically i think i said everything up to today was potentially in scope um so without io computers are useless and the problem though is that there's so much io right there's thousands of different devices there's different types of buses so what do we do how do we standardize the interfaces on
these devices and the thing is the devices are unreliable media failures and transmission errors happen and so the moment we put io in here our carefully crafted virtual machine view of the world suddenly gets very messy and we need to figure out how to standardize enough of the interfaces across all these different devices so that we can hope to program this okay so how do we make them reliable um you know because there were lots of different failures how do we deal with the fact that the timing is off they're unpredictable they're slow how do we manage manage them if we don't know what they'll do or when they'll do it okay all of these different things and really um philosophically i like to think of this as the fact that the the world which is what io touches is is really very complicated and um computer scientists like to think in some simple ways and nice abstractions and uh when the nice abstractions collide with the real world uh you get problems okay you get you get the the fake news shows up right and so we got to figure out what to do about this and so if you remember we kind of said what is i o well io is all of these buses it's the networks it's the displays and we somehow have this nice clean virtual memory abstraction of processes and stuff virtual machine abstraction excuse me above the red line and you know storage uh we have to access the binaries we have to access our networks across that protection boundary and all of the i o is both below that uh kernel boundary of processing and potentially out into the real world and hopefully the os is going to give us some sort of common services in the form of io that we can then access without caring so much about the exact precise details of the world and the other thing is of course the the jeff dean range of time scales where cash replacements might be 0.5 nanoseconds all the way up to you know the time to send a packet from california to the netherlands and back might be you know 150 milliseconds 
there's a big range and so whatever we do uh it's likely that um we're gonna need a whole a whole range of techniques to deal with all of these different time scales okay now uh so let's go and think about this a little bit more um if you look at uh the device rates varying over 12 orders of magnitude here's the sun enterprise buses these are all different uh devices that are actually on those buses the system has to be able to handle this wide range so you don't need uh you don't want to have high overhead for the really high speed networks or you're going to lose packets but you don't want to waste a lot of time waiting for that next keystroke which is going to take a long time okay so in a picture what do we have we have our processor which we've been focusing on pretty exclusively say this is a multi-core machine which is each core has registers an l1 cache and an l2 cache and then those cores share an l3 cache okay and that's our processor and then we've got to deal with the i o out here and what you can see is the i o devices are supported by i o controllers for instance here and those i o controllers provide some standardized facilities to talk with the outside world and then there's various wires and so on that communicate okay and this these interfaces are the things we need to figure out how to make work okay and and you know right for instance if you were gonna pull something off of ssd you're going to put commands into the i o controller which is then going to reach out across a standardized bus start the read off the ssd which will pull it through dma into dram and then you can read and write as a result once it's in dram and so there's a lot of different interesting pieces here that we're going to have to figure out okay so dma writes to um that's a good question in the chat does dma write to physical addresses i'm going to say yes for now although there are virtual dma protocols that can write into virtual memory as well but usually you pin it into 
physical memory before you start dma okay so here's another look at a modern system so you got the processor with its cache and then you've got various bridges to pci buses for instance and then maybe you have a scsi controller that talks to a bunch of disks or maybe you have a graphics controller which talks to monitors or maybe you have an ide controller which talks to slower disks etc and really all of these different buses are part of the i o subsystem as well okay so what's a bus so it's a common set of wires for communicating among hardware devices and there are protocols that have to be satisfied on these wires so operations or transactions include things like reading and writing of data control lines address lines data lines have to be part of this bus so it's typically a bunch of wires okay and you have many devices that might be on a bus right so this is a standard abstraction for how to plug and play a bunch of individual things onto a common bus that then can get to your processor okay and so there's protocols um there's an initiator that starts the request there's an arbitrator which says it's your turn to actually talk um there may be handshaking to make sure that no data is gone before you can grab it so the communication's only as fast as permissible um there's also arbitration to make sure that two speakers don't try to speak at the same time etc okay now the closer we are to the processor typically the wires are very short we can get very high speed communication the farther away from the processor the wires are longer or you go through more gateways and the communication gets a lot slower so you know things that need to be really fast are typically close to the processor things that maybe need to be more flexible are often further away but slower so why do we have a bus well the buses in principle at least let you connect n devices over a single set of wires so buses came up over the long history of computers as a way
of allowing us the maximum flexibility to plug in many devices okay now of course you end up with n squared relationships between different devices on that bus which can get messy very quickly the other thing is that there are several downsides to a bus so one is that you can only have one thing happening on a bus at a time and that's because everybody has to listen okay and that's where the arbitration part comes into play the other downside which i'm going to point out here before we leave the bus is the longer the wires the larger the capacitance the slower the bus is because capacitance takes a long time to drive up and down i don't know if you guys talked about that in 61c but basically if you have a really long bus and a lot of capacitance it means to change a wire from a zero to a one you have to charge it up and the more capacitance the longer that takes okay so buses that get too long get slow so that kind of explains part of what i'm about to say next which is here's an example of the pci bus you've probably taken a look inside of one of your computers you can plug a card in it's got many parallel wires representing 32 bits of communication or what have you a bunch of control wires a bunch of clocking wires and this is a parallel bus because all of the different card slots are all connected together with a common set of wires okay and so what i showed is an arrow back here each one of these slices might have another one of those connectors on it that would connect across um you know tens or hundreds of wires in that bus okay and so not only is there a lot of capacitance in this but the bus speed gets set to the slowest device so if you have a device on here that responds very slowly then everybody suffers okay and so what happened is we went from the pci bus to for instance pci express and some of these others in which it's no longer a parallel set of wires but rather a bunch of serial communications that all tie everything together and act like a bus but is
really a bunch of point to point okay it's really a collection of very fast serial channels devices can use as many lanes as they need to give you the bandwidth and then slow devices don't have to share with the fast ones and so therefore you get the expandability of something like a bus but the speed of a single point-to-point set of wires between each device okay and one of the successes of some of the device abstractions in linux for instance is going from the pci bus the original parallel bus to pci express really only had to be reflected at some of the very lowest device driver levels most of the higher levels of the operating system never even had to know the type of device so that's a good example of abstraction coming into play here to help deal with the messiness of the real world so here's an example of a pci architecture you know you have your cpu you've got a very short memory bus to ram so these are typically a bunch of what are called single inline or dual inline modules and they're connected on a bus that typically is connected with very short wires directly to the cpu okay and so that can be blazingly fast and then the cpu typically has bridges to a set of pci buses and these are serial communications and plugged into the pci bus for instance would be a special bridge to the original industry standard architecture bus this was the isa bus on the original ibm pc what happens in a modern system is you fake it by having a fast pci express bus but the isa controller can talk to legacy devices like old keyboards and mice and so on okay and also though you might have bridges between different pci buses and now typically you have usb controllers uh where usb is actually a different type of serial bus that has a set of root hubs and regular hubs and this is a webcam keyboard mouse those can be plugged into usb which is plugged into pci which is plugged into the cpu okay and then you can also have disks and so on so this is a view of the
complexity of the bus structures uh but all of this gets hidden behind proper device drivers so that the higher levels of the kernel don't have to worry about some of this complexity only the lower levels okay the question is is this parallel or serial the answer is yes okay now um so basically uh when i say pci i'm talking about pci express is serial pci bus is parallel um depends a lot on what parts of the system we're talking about but basically the the serial communication for pci express is uh far more prevalent than the uh the parallel ones these days and it's gonna depend on um your exact system so you can you can uh open up some of your uh specs that talk about your computers and see kind of what the buses are inter internally okay now how does the processor talk to a device so i wanted to start our conversation here a little bit about what it is that's inside the operating system uh that talks to to uh devices and so we already talked about the cpu might have a memory bus to regular memory okay and so that's a set of wires that typically the hardware knows how to deal with directly okay um on the the memory bus or possibly directly connected to um parts of the cpu okay we'll talk about that a little bit might be a set of adapters okay and those adapters give you other buses and i'm we're not going to worry exactly what the buses are here but what i wanted to show you is that typically the cpu is trying to talk to a device controller this big thing in magenta and that device controller is the thing that has all of the smarts to deal with a specific device it gets plugged into the right bus interfaces in a way that the cpu can send commands to that device controller and read things from the device controller okay and some of that communication might be via reads and writes i'll show you this in a moment um of special sort that basically go across the memory bus or across a bus to the device controller and and set registers to control its operation or pull 
data or start dma we'll talk about that in a moment also coming out of this is typically interrupts that go to the interrupt controller now we already had the discussion about interrupt controllers earlier in the term but one of the ways that the device controller typically says that it needs service or that something has been completed is over an interrupt okay so the cpu interacts with the controller which typically contains a set of registers that can be read and written so what i've got here for the registers are ones that potentially allow you to read and write things about the device maybe set some commands like for instance if this is a display maybe one of the things you might write to the second register is about the resolution okay now the device controller the question in the chat is is this the same as the device driver no this is hardware the device driver is running on the cpu and the device driver knows how to talk to the device controller hardware okay so the device controller this is actually hardware okay and so if you look here um for instance we might have a set of registers that have port ids on them i'll show you what that means in a moment but for instance port 20 might be this red one the first register port 21 might be the second port 22 might be a control register port 23 might be status and by reading and writing those ports i could change the resolution of the device the other thing is i can read and write addresses okay and reading and writing of addresses allow me to potentially write bits directly on screen okay so there's two different types of access that are typically talked about between the processor and the device controller one is port mapped i o where the cpu uses special in and out registers that address ports in the controller okay and that's the special register names and the other is memory mapped io where just by reading and writing to certain parts of the address space i cause things to happen on my device and so i want to talk
about port mapped i o and memory mapped i o so port mapped i o typically only shows up on things like the x86 processor or very specialized processors that have i o instructions memory mapped i o is much more common where you can read and write from special memory addresses and it just goes to the controller okay now region here is what region of the physical address space can i read and write to that's going to cause things to happen here i'll show you that in a second now here's an example if you were to go into devices/speaker.c in pintos you'd actually see something that turns the speaker on at a frequency and off and what it says here is it's going to do some stuff and talk to hardware and the thing i wanted to point out is these outb instructions okay there's special code for that that really compiles to the assembly instruction you see inside of this routine it actually runs an instruction called outb and what that outb is is an i o instruction that writes to an address port that's going to touch the speaker okay and there's also a corresponding inb which is another instruction so these are actually native instructions for the x86 processor that take a port number and some data and access that io device and these port numbers um typically are 16 bits or they can be 32 bits under some circumstances but they're a small address space for i o okay the memory mapping is a little different idea okay for memory mapping we have uh this is our physical address space where if you keep in mind obviously there's going to be big regions that have dram in them for the physical address space but when you have a device plugged into the system you can have regions of the address space that actually talk to that device directly so if i happen to have reads or writes to this part of the physical address space what i'm going to do is write commands into a
graphics command queue which might for instance cause triangles to be drawn on the screen if i'm doing some cool three-dimensional rendering okay or if i read and write this region of memory i might actually put dots on the screen and then there's another region which might be commands and status results where just by reading and writing the addresses in that region i get back status or i cause commands to happen so um in the example here might be that if i were to write dots on the screen i just write to display memory and i can cause characters to show up there by writing the right dots right or if i write graphic descriptors i mentioned here this could be a set of triangles which then i hit a command and that will cause it to be drawn okay now are these addresses hard coded so typically in the really old days these addresses were hardcoded now what happens is depending on what bus this is on like the pci express bus these addresses are actually negotiated at boot time by the boot driver this is not in the regular pintos code this would be in the boot driver with the hardware over the pci express bus to decide which physical addresses go to which parts of the hardware and the reason this auto negotiation is so good is because that means if you plug a bunch of devices in they negotiate so that there are non-overlapping addresses whereas once upon a time you actually had to set jumpers and stuff on cards before you dared to plug them in so that you didn't have overlapping addresses for your different devices okay all right questions about memory mapping versus port mapping there's a good question there so the good question on the chat is so is data getting written to memory and then the device controller reads it or does writing to these addresses just go directly to the device it's the latter okay so you don't put it into dram and then have it go into the controller what happens is the act of writing doesn't
go to dram it goes to the actual controller okay now what you can do uh so the question here is why wouldn't they use virtual addressing to solve the negotiation problem the problem is you need an actual physical address on the bus and then you can virtually map to it so if your physical addresses overlap then you got a problem think of this like we've been talking about dram is our physical dram space if we had different dram cells that map to the same physical address all chaos would happen right so we got to make sure that the physical addresses that are dealt with in the cards are all unique from each other and once we've got that then you can map virtual memory uh parts of the virtual address space to these physical things and then you know you can give command of a device to a user level process for instance just by setting up its page tables the right way to point at those physical addresses but you need to make sure that the physical addresses don't overlap first okay now there's a good question of what is faster port mapping or memory mapping so the answer is the memory mapped options are usually a lot faster um under most circumstances this mechanism of using ports is kind of a legacy mechanism you often use it only to access old devices old school devices or ones that are part of the ibm pc spec okay and the reason is really that mapping through memory is so much more flexible it's a path that's been set up for large addresses and you can actually tell the cache to ignore certain addresses so if you look carefully at the page table entries i don't have it up today but look at it from last time you'll see there's a couple of bits in a page table mapping that talk about not putting the data in the cache and you want that because you want to make sure that all writes go straight through to the hardware and then when you read you don't want it to be cached so that you don't accidentally get old data you want your read to
always go directly from the hardware into the processor okay good any other questions so there might be overlapping so the question is why was i saying there might be overlapping physical addresses imagine simply putting two of these display controllers into the same machine okay if we hard-coded which physical addresses were for that card we now have an overlap okay and so that overlap needs to be removed and that's part of the negotiation process for modern buses like pci express and so on now the question about ports is ports are actually a completely separate physical address space from the regular physical address space and so the ports go via a separate path if you will the data is all the same but the addressing bits say something different they say this is not part of normal addresses this is part of the port map space all right good now and you can protect this with address translation and where do these usually get mapped in virtual memory it depends on how they're being used so if you're not giving the user the ability to touch a device which you have to be very careful about doing that then it's going to be mapped into a part of the physical address space that doesn't have dram in it and if you take a look at um you know the typical linux memory maps there's going to be some spots often in very low memory for io and also in high memory is another possibility too but um the uh you know it really it's going to depend a lot on the actual hardware that you've got and you know where is there dram where is there not you need this to be in the places where there's no dram okay and each of the buses like pci express and all the others they all have their own spaces that they map into as well okay so i think the right answer to that question is really you don't really need to worry about exactly where in physical space it is just that it gets mapped in physical space and that at boot time we make sure it doesn't overlap
with anything else mapped in that same space okay so there's more than just the cpu i wanted to say a little bit about this uh so this is for instance skylake i've talked a little about skylake but it's got multiple cores you can have like 50 some cores in there okay 52 and there's typically a bus that might be a ring it might be a mesh okay there are a lot of different options each core has a processor in it okay the processor might do out of order execution remember meltdown we just talked about that it might have a bunch of special operations to deal with security and so on um but that's just the processor if you look at everything else here we've got the system agent so that basically talks to various dram controllers that's the imc it can also talk to other chips to give you cache coherence okay and then also there's a gpu in this particular chip down here the processor graphics which can actually draw on the screen and so on um if you don't have a special gpu in your system and so there's a lot of different pieces in here that are more than just the processor that's kind of my point the processors are very interesting but all of this stuff with the system agent gives you dram gives you display controllers processor graphics gives you graphics and then there's integrated io on most modern chips from intel okay and so that's the memory controller pci express for graphics card so you see coming off the chip here typically there's very fast pci express options up top here for other graphics there's also built-in graphics which is lower performance but pci express um directly on the same chip okay and so you know like in the old days you had the processor you had other stuff then you had some buses and so on here the pci express control signals are actually coming directly out of the chip and there's this direct media interface for the platform controller hub you see up at the top this typically connects to a lot of other io okay so
here is an example where we have the processor and notice this is another view we've got pci express we've got dram that's the ddr we've got embedded displays and so on and then the platform controller hub down here handles pretty much everything else that's interesting okay all right so um the thing to to really learn about this particular slide is to understand the fact that the i o is tightly integrated and that there's a lot of really interesting i o coming off of this okay so the platform controller hub is this chip lots of i o okay usb ethernet thunderbolt 3 bios okay this lpc interface is for legacy things like keyboards and mice and so on okay you don't need to know all of these details but this is trying to give you a flavor for some of the interesting things we have to control okay um so we're gonna um we're gonna finish up here pretty soon but i wanted to cover a couple more things before we're totally totally done so um when you start talking about io and we're gonna go into this much more detail in a couple of days you start talking about things like well do i typically read a byte at a time or do i read a block at a time so some devices like keyboards etc mice give you one byte at a time okay things like discs give you a block it might be 4k bytes it might be 16k bytes at a time networks etc tend to give you big chunks um we might also wonder not just byte versus block but are we reading something sequentially or are we randomly going places so some devices you know tape is an obvious case where you have to do sequential right the others can give you random access like disks or cds okay and in those cases there's some overhead to starting the transfer but then you can pull the data out in large chunks often once you've gotten to that random spot some devices have to be monitored continuously in case they go away and come back some generate interrupts when they need service okay transfer mechanisms like programmed io and dma we're going to talk more 
about that next time okay these are different ways in which to get the data in and out of the device i showed you the topology earlier with the cpu talking to the controller but now we've got how do how do we actually get the data in and out do we do it one byte at a time in a loop or do we ask for big chunks of data that go out automatically that's going to be something we talk about okay and so really i think i think i'm going to save this discussion for next time so in conclusion we've talked about lots of different io device types today there are many different speeds many different access patterns okay block devices character devices network devices different access timings like blocking non-blocking asynchronous we'll talk more about that next time we talked about i o controllers that's the hardware that controls the device we talked about processor accesses through i o instructions or load stores to special memory as you know there are various notification mechanisms like interrupts and polling we'll talk a lot more about polling next time but you're very familiar with interrupts okay and all of this is tied together with device drivers that interface to i o devices so the device drivers talk to the controllers and the device drivers know all the idiosyncrasies of the controllers and how to make them work and then the device drivers as we've discussed in the past provide a really clean interface up okay they provide a clean read write open interface they're going to allow you to manipulate devices through programmed i o or dma or interrupts there's going to be three types of devices we'll talk about block devices character devices and network devices and so i think i'm going to let you go um i hope to see you in a couple of days we're gonna have some uh interesting stuff about uh devices to be talking about um next time but uh hope you have a good rest of your monday and i hope there weren't too many of you that had the threat of power outages i know that 
there are parts of uh parts of orinda and lafayette moraga on the other side of the hills that all have their power out but all right um other people are oh evacuated that's even worse i'm sorry to hear that i hope that you get back to your living situation soon have a great evening and we will talk to you tomorrow i mean excuse me talk to you on wednesday

CS_162_Operating_Systems_and_Systems_Programming_Berkeley / CS162_Lecture_5_Abstractions_3_IPC_Pipes_and_Sockets.txt

welcome back to cs 162 everybody i almost said 262. um we are uh out of mars and the upside down it appears because there's actually some non-orange light that's happened today but it's still a bad air quality so that's not great but let's see what we can do today and continuing our topics we're going to be talking a little bit more about the user's view of the system so that when we really dive in to details inside the operating system you'll have a good clue why we're doing what we're doing so today we're going to talk about communication between processes we were talking about how to create them and how to create threads but now we're going to talk about communicating between them we're going to introduce pipes and sockets and tcpip connection setup for web servers for instance and the thing to think about here is our mental model here is going to be process a on one side of the network talks to process b on the other side and they use read and write just like the file interface okay so the other thing i just wanted to keep everybody in mind here is we talked about creating processes with fork so the fork basically copies the current process all of its address space the state of the original process is duplicated in the parent and the child that's the address space the file descriptors etc and what i'm showing here basically is giving you a brief way to look at this when fork returns once the two processes have been created fork returns in each of them and in one of them it returns something bigger than zero that's the parent and the other one it returns zero and that's the child and i um show you this on this side here once we've forked this is the parent with cpid greater than zero and it's actually executing a wait which says it's going to pause or go to sleep until the child exits which is this other piece of code with
a 42 which is in this case an error so most cases with unix a return code of zero is what happens when there are no errors so i saw an interesting question up on piazza i thought i would say something about so the question is why fork i mean if it's really creating two identical processes what's the point and the point is there are two processes where there was one before okay so fork is basically how you create new processes this is mostly true because as i mentioned here linux has something called clone which gives you more options than regular fork but fork was the original mechanism way back in the first versions of unix and so its semantics are partially historical but the question of why fork is really that's the way you get new processes so um last time we talked a lot about the fact that in unix pretty much everything's a file okay obviously you can talk to files with read write you can talk to devices you can do inter-process communication which we're going to show today but that interface is pretty constant okay and among other things it's going to allow this simple composition find piping into grep piping into word count etc that you're getting used to with your programming at user level and you're going to actually implement when we get around to project two this particular modality of communication with the kernel is you open everything before you use it okay and so all of the access control checking is done on open and if you get returned something then you know that you were successful in opening the other important thing is that in unix the kernel is extraordinarily agnostic okay it's agnostic to what the underlying structure of the data is that means that everything is essentially byte oriented regardless of whether it's coming off of a disk 4k at a time or off of a keyboard one byte at a time now the question of if processes are composed of threads does forking a process fork all the threads so we answered
that last time the answer is no so you've got to be very careful only the only the thread that actually executed fork is recreated in the child process so the other thing we briefly talked about was the fact that colonel buffers reads and writes to give you that byte-oriented uh behavior so it basically takes from the disk it might take 4k or 8k or 16k at a time and it buffers it internal to the kernel so then you can read 13 bytes and then 12 bytes and then 196 bytes without having to go to the disk all the time because that would be extraordinarily inefficient writes are also buffered so when you write you don't have to wait until it gets pushed out to the disk before it returns back to the user okay and then because we had open before use we also have an explicit close operation that uh typically you use when you want to close something out and clean everything up although the kernel uh will do that if your process just ends and you haven't closed things so i wanted to put together a kind of a walking pattern for thinking about today's lecture and this is going to be one process so we're not talking about inter-process yet but it's a web server which you've all used a lot and here we have the standard three layers we've got the user level and notice that even a server is running at user level we have the kernel which is all of the kernel code that's giving the glue and the virtual machine and so on is all done in the kernel and then the hardware of course has got things like networking and disk and so on and so we could imagine that the server process starts up and the first thing it does is it's going to open some sockets to get ready to listen to incoming requests we'll get to that in a moment but notice that that first thing it does is a read and that read goes to the socket and it has to take a system call to do that and the first thing that happens is wait okay why because there's no data yet so that server gets put to sleep or the thread that did this gets 
put to sleep -- the server could be multi-threaded, as we'll talk about -- because there's no data. And notice we've used read, so we're actually going to be communicating with the network in the same way that we did with the file system. Sometime later, data is going to come in remotely over the network -- for instance, this might be a request to the web server to read a certain URL. It'll generate an interrupt (we haven't talked about that yet), it'll copy things into the socket buffer, and then, poof, the wait condition is no longer true, and we're going to be able to wake up, remove ourselves from the kernel, and basically return from read. So we went into the kernel with read, we stayed there for a while, and eventually we returned from read with data. So there's a request, and now that request -- since we're talking about a web server -- is likely to need to get something off the disk. So it executes a read to a file descriptor for the disk file system, and now it's going to wait a little bit, because potentially the disk has to be accessed with the device driver, so that may take some time to pull things off the disk. Then the disk interface will eventually hand back the requested data, which again will remove the wait condition and return from the read system call with data. At that point we format the reply, like an HTTP reply, and we go back to our network socket with a write. That again is a syscall boundary, which will send the packet out, and notice that we don't have a wait condition here, because I'm assuming the buffers aren't full and the data just goes out. And of course after step 12 we're going to repeat and do another read. We're going to see that a lot in a little bit of the lecture today -- we're going to talk more about how this network communication works -- but before we get there, I did want to point out one thing: if you recall, we were stalled on our read, both for the network and for the disk, for a little while, and the kernel took all the responses in from the disk and from the network, saved them up, and buffered them, so that we only got returned what we asked for. So the boxes here inside the kernel are slots for bytes or whatever -- think of them as a generic queue of some sort. The case of write means essentially that when we write our data, it goes into the kernel and is buffered by the kernel, and we can return immediately back to the server to do another read if we want to.

Now, again, remember we talked about both high- and low-level APIs for file data and also for I/O. Here's an example of the high-level streams, which almost all have an "f" in front of them -- fopen, fclose, fread, fwrite -- and when they return, they return a pointer to a FILE data structure. That FILE data structure has inside of it the fact that this was successfully opened and, if it was successfully opened, the information required to do the reads or the writes, depending on what you ask for. An error is returned from the operating system -- and from the library, in this case -- with a NULL FILE*, so if your FILE* is zero, or NULL, then you know that it failed. You use the pointer that was returned for all subsequent operations, and data in the high-level streams is buffered in user space in addition to the kernel. So now, here's a question: does the kernel buffer network traffic indefinitely before any data gets returned at all to read? If you don't execute a read, and you open a socket and a bunch of data arrives, then what will happen is it'll start filling up in the socket buffer, and eventually that'll fill up, and it'll back up to the sender and tell it to stop sending data. Then as you start reading, it'll pull data out of the network and empty the socket buffer, and things will get started again. We'll talk about that later.
To contrast this high-level streaming infrastructure, where there's actually buffering at user level, we have the low-level raw interface, which is basically using system calls directly -- that's open, creat, and close. Notice that what returns from open on success is a file descriptor, and that file descriptor says which file was opened, but the way it says that is not something you can figure out; it's something the kernel does. It has a table inside mapping file descriptors to file-description data structures, and so you're going to get back an integer that you're not going to know what to do with. The one integer value that does matter: less than zero, or minus one, says this was a failure, and then you've got to check errno. And finally, since streams (that's the high level) and the system calls (that's the low level, like open, creat, close) are tightly related to each other, if you take a stream and you run fileno on it, you'll actually get back the internal file descriptor that's part of that stream.

All right, so the flags here are saying whether you're doing reading or writing to the file -- that's what you want to do to it -- and the permission bits are what other people can do to it. So the flags are what you want to do locally, and the permissions are what other people can do with it. Now, the question: does this lead to a vulnerability where other processes could try a random number to access a file they shouldn't? I'm assuming that what you mean is that you randomly choose an integer and then try to use it in read or write. The point is that all of the access control is done on open, and then the kernel, for your process, puts into the table a mapping between the file descriptor number and the actual internals of the open file. The best you would get by randomly selecting something is that maybe you'll pick one that was open, but then you already have permission to use it, because it's your process. If you pick something that's not there, there's no way you'll get another person's file, because that mapping between numbers and open file descriptions is actually unique to the process. So random descriptor numbers don't help you here.

Now, we also talked about the representation of a process inside the kernel. If you look here, the process of course has its address space, which we're going to do a lot with in a couple of weeks; it's got registers for at least one thread -- there's always one primary thread in a process, and there could be more; and it's got this file descriptor table, which maps numbers to open file descriptions. Notice, by the way, there are always 0, 1, and 2 that are started up when you start a process. We didn't include them here, but we did talk about them last time: standard in, standard out, and standard error. So this descriptor table gives you a redirection, and each open file has a description that's in internal kernel data structures. So file descriptors are per-process; file descriptions are not necessarily. And we talked about that last time. For instance, here's process one and process two -- perhaps the parent process is one and the child is number two. After fork, you copy the address space, the registers of the thread, and the file descriptor table, which happens to point now to a shared file description, and if you take a look at the end of last lecture, we talked about some of the good and bad consequences of this. And then of course 0, 1, and 2 are typically attached to the terminal, but on the other hand you can redirect them, which is where piping comes into play. The position variable is how many bytes you've read so far in the file -- except you've got to be careful, because this is the position that the kernel knows of. If you're using the streaming interfaces, with the f in front of them, there's a different
buffer inside of the user space that also keeps track of the position for your reading through fread and fwrite. So these two pointers are not necessarily the same, and you should take a look -- at the very end of one of the recent lectures there's a discussion of that. And yes, the position variable is not how many bytes you've read so far; it's the position of the next thing that's going to be read, so that you can change the position with the various seek operations. If you were to seek back to 100 and read, seek back to 100 and read, you could do that over and over again and keep reading the same thing, and this position wouldn't change in that case.

Okay, so that's a very quick reminder of things. I just wanted to talk to you about some brief administrivia. Of course, homework one is almost due, so hopefully you're making great progress on that. Project one should be in full swing -- that's been released, your groups have been set, and your discussion sessions hopefully have been set, so you're all set up here. It's time to get moving; make sure that you figure out how to have your partners meet regularly, because that's important. You should be attending your permanent discussion session -- remember to turn on your camera in Zoom, and discussion attendance is mandatory so your TAs can get to know you. So that's important. The other thing I'm sure you're well aware of is our first midterm coming up October 1st, roughly two weeks from Thursday -- sorry, this slide says three weeks from tomorrow, I didn't change that, but it's two weeks from Thursday -- and be prepared. The last thing is, again, plan how your group collaborates. We're going to be giving you credit for showing us some selfies of all four of you talking in Zoom with your cameras on, but even apart from that, you should consider doing it: try to meet multiple times a week, because even in real space, not virtual space, for people that don't meet regularly, the projects end up failing at the end of the term, and you don't want that. So try to keep your groups moving.

Now, we had a couple of questions on the chat here. The question: since syscalls are expensive, is it possible to pre-request threads and then schedule them at user level? The answer is yes, and we'll talk more about that -- I'm going to give you a brief example toward the end of the lecture where we talk about thread pools, for instance for web services. So that's a good idea. Now, the selfie, by the way, that I was talking about is a screen capture from Zoom, because you're supposed to be using your cameras when meeting with your partners as well -- you don't have to have video; a screenshot's fine. And then the other question: will the descriptor have the same value across processes? Only if the file descriptor is shared because you had a parent that executed fork -- then the child will have all the same file descriptors. If you open the same file in different processes independently, there's absolutely nothing that says the file descriptors have to be the same, unless you're doing some tricks with dup or dup2, which is something you're going to learn how to use.

So today we're going to talk about communicating between processes. What if a process -- there are multiple of them -- wants to communicate with another one? Why might they want to do that? Well, perhaps they're sharing a task, so both of them are doing something, or perhaps there's a cooperative venture with some security implications. What do I mean by that? Well, clearly, if you have a bunch of threads in a single process, it's easy for them to communicate, but perhaps you don't trust everything that that other code is doing, and so you'd like to have separate processes -- but then you want to have them communicate. And this is not uncommon. So the process abstraction is
designed to discourage this, right? It's set up to make it hard for one process to mess with another one, or with the operating system. That's by design; that's a feature. So we've got to do something special that's agreed upon by both processes -- think of this as punching a hole in the security, but doing so in a way that's okay with the two processes. So we start off with no communication, and then we've got to communicate, and we call this inter-process communication, not surprisingly. Now, if you remember -- I just want to re-emphasize this, and we're going to talk a lot more about page-table mappings in a week or so -- there's a page table that does these translation maps for you, and it basically says that process one's code goes through the table and maps to some part of physical space that's different from process two's code. Notice they're using completely different parts of the physical DRAM, and the same for data, heap, and stack, and as a result they can't alter each other's data. Right, that's by design; that's part of our protection. So we've got to figure out something else for communication.

And if you think about it, we've already talked a lot about something that works, right? We've talked about how you could have a producer, which is a writer, and a consumer, which is a reader, separated in time, communicate. How do we do that? With a file. We already talked a lot about how, when a parent process creates a child process, they share the file descriptor table, and so if you have a file that's been opened for reading and writing and then you produce a child process, the two of you can exchange data through the disk. So that's easy. Can anybody say why this might not be desirable? Yeah -- slow. Why slow? Well, you're not really thrashing the disk per se, but it is slow, because what you're saying is that for communication of data which is already in memory, you've got to go out to disk and back. So this doesn't seem particularly desirable for that reason, but I do want to point out that this idea of writing to some file descriptor and then reading from a file descriptor is our standard Unix I/O mechanism, so whatever we come up with isn't going to be very different here.

Now, I did see an interesting question in the chat, and this is going to be the first time I tell you this today -- so here's your fact for the day. Does anybody have any idea how many instructions you lose by waiting for a disk to pull data? Well, it's not 100 billion, but it is a million. A million is a good rule of thumb, especially when you have multi-issue processors that are running more than one thing at once -- so think at least a million. So going out to disk and back is not good; it's very slow. Now, of course, what we haven't talked about yet is caching inside the kernel, so in reality you could write and read without ever going out to disk, but this interface by its very nature tries to push data out to disk, and so I'm basically taking something that ought to be a quick communication through memory and adding a disk onto it for some goofy reason. So this might not always be desirable, and you may want something else when you don't care about keeping your data persistent. Is there a faster way? Yes, there is. Now, one thing we also won't talk about today is this -- do you see what I did here? See the red? What I did was: yes, initially it was impossible for process one to talk to process two through memory, because we mapped it that way, but we can also choose to map certain parts of memory so that both of them share it. That's what's red here -- both mapped to the same page in memory -- and then you can do things like have data structures that are shared; you can have linked lists that are shared, all sorts of cool stuff. So this is pretty uncontrolled, but it is fast, and we'll talk about how to make this work after we've gone through how we can communicate and set this up.
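The "both mapped to the same page" picture just described can be sketched with mmap. This is a peek ahead and a hedged sketch, not the lecture's code: shared_page_demo is an invented name, and using waitpid as the only synchronization is a crude stand-in for the locking machinery we haven't covered yet.

```c
#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch of mapping the same physical page into two processes: a
 * MAP_SHARED | MAP_ANONYMOUS region created before fork stays shared
 * with the child, so a write by one process is seen by the other.
 * There is no locking here -- as the lecture warns, this is fast but
 * uncontrolled. */
int shared_page_demo(void) {
    char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) return -1;

    pid_t pid = fork();
    if (pid == 0) {                      /* child: write into the shared page */
        strcpy(page, "hello");
        _exit(0);
    }
    waitpid(pid, NULL, 0);               /* crude sync: wait for the child to exit */
    int ok = strcmp(page, "hello") == 0; /* parent sees the child's write */
    munmap(page, 4096);
    return ok ? 0 : -1;
}
```

Without the waitpid, the parent might read the page before the child has written it -- which is exactly why the next few lectures are about synchronization.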
So we're not going to get there yet -- we're going to need locks, we're going to need a lot of stuff -- so before we go to this shared-memory model, let's understand a few things. Today's inter-process communication is going to be a little different from this. What else can we do? All right, so disks aren't great. Well, what if we ask the kernel to help us in other ways, like an in-memory queue? The producer puts stuff on the queue and the consumer consumes stuff, and we'll use system calls, for security reasons, so we're not going to open up a security hole. And by the way, if you do this shared-memory thing, you've got to make sure that you're okay with the other process completely reading and writing the data that you're reading and writing, so you have to do this carefully. But what else could we do? Well, here we go -- here's a queue. Notice this is not a disk anymore, but process A executes a write system call, which puts stuff in the queue, and process B executes a read system call, which removes things from the queue, and now suddenly we've got communication. Wouldn't that be great?

So, some details of what we might want before we figure out how to do this. For instance: when data is written by A, it's held in memory until B reads it -- that sounds good. It's the same interface we use for files -- yeah, that's good. It's much more efficient, because nothing goes to disk. But we have some questions here, like: how do we set it up? What if A generates data faster than B can consume it? Then the queue is going to get full. Or what if B consumes data faster than A can produce it? Well, then the queue is going to be empty. So what do we want to do for these? If A is generating data too fast, what do we need to do -- anybody have any ideas? How do we tell it to slow down? What might be the simplest thing? Well, not a lock. Yeah -- wait. Very good; wait is the key. As I'm going to teach you, and you're going to hear over and over again -- not a semaphore, we haven't gotten there yet -- what you're going to hear from me over and over in the next couple of weeks is that the way you solve synchronization problems is by waiting. So in this particular instance, what we want is: when process A executes a write system call but the queue is full, we want A to go to sleep. And if B tries to execute a read system call and the queue is empty, we want B to go to sleep. And the important part is that once memory space becomes available, if A is asleep, we want to wake it up and finish the write system call; and furthermore, once there's data in the queue, if B is asleep, we want to wake it up and return from read.

Now, the question here is: why wait rather than a lock? Well, the answer is that locking is all about waiting. So this is a type of locking, but it's a type of locking that's particularly convenient when we're doing writes and reads to an API in the kernel, because the kernel can put those threads to sleep and wake them up again when it's time. And deadlock here is only a problem if -- well, there's no deadlock here, because there are no cycles. You might be asking whether there's a livelock issue here, where B gets put to sleep and is never woken up because process A has refused to put any data in. That's a bug, and in fact what you can do is set up reads and writes to time out after a certain amount of time if they're not satisfied. So it's not possible for process A to screw process B up by not writing anything. If there's a cycle, that's a different problem -- let's hold on to that for now.

All right, so here's the first thing that looks exactly like that queue I wanted to talk about, which is the Unix pipe. It's also part of POSIX, and it's essentially just a queue -- we call it a pipe. Process A writes to the pipe, process B reads from it, and they use the same read and write
interface we've talked about before, and now we've got communication across process boundaries. The memory buffer here is going to be finite -- why? Because memory is finite -- and if producer A tries to write when the buffer is full, it blocks and is put to sleep until there's space, and if consumer B tries to read when the buffer is empty, it blocks, which means it's put to sleep until there's data. So this has exactly the semantics of what we wanted. And there's a system call called pipe, which you will become very familiar with soon, which looks like this: you call pipe and you give it a pointer to a two-element array that can store two integers. Why is that? Well, we need a file descriptor for both ends of the pipe -- the input end and the output end -- and so what this pipe call does is create a pipe, open up the two ends, and return two file descriptors. When you write on the input end, it goes into the pipe, and when you read from the output end, it comes out of the pipe.

Now, the question about how we know if there's data in the pipe -- can somebody answer that? Do we have to monitor it, do we have to poll it every now and then to check? Hamming codes? Nope, no Hamming codes. So how do we know? There you go -- we had a great answer there. If process B is reading and there's no data, it goes to sleep. The kernel knows when process A writes, because process A wrote -- the kernel knows this. The pipe is not a separate process; the pipe is just some memory in kernel space, and so when process A goes to write, the kernel, as part of putting the data into the pipe, checks and sees that, well, there's a read waiting, so it just wakes it up. Because this is all running inside of the kernel, the kernel knows. So the kernel knows, when process A writes, whether B needs to be woken up, and it knows, when B reads, whether A needs to be woken up, and that's purely an advantage of being an internal kernel interface.

All right, questions? So the pipe is not a process; the pipe is just a queue inside of kernel memory whose interfaces are the system calls read and write. This is not necessarily standard in and standard out in general -- you can do anything you want, so you yourself could create a pipe with new file descriptors that aren't 0, 1, or 2. "Are there other examples of process besides read and write?" -- I'm not sure I understand the question. Processes do all sorts of stuff, but reads and writes are the way that we do communication, either with other processes or with the file system. "So you get two new file descriptors?" -- exactly, this is an array of two file descriptors, and I'm going to show you an example here. Here's an example where we actually make an array of integers that's got two slots in it -- that's this "int pipe_fd[2]" -- and then I call the pipe system call by saying pipe and giving it the pointer to that array, and if what comes back is a minus one, then that's a failure. That's a pretty standard idea in Unix, and we return saying there was a failure. Otherwise we succeeded, and now we have two file descriptors for the two ends of the pipe: pipe_fd[1] is the write end and pipe_fd[0] is the read end. (You should do a man on pipe, by the way, to see the interface there.) So all we have to do, for instance, is: if I have a message, which is "message in a pipe," I write that into the pipe -- and I have an extra plus one here after strlen, which lets me make sure I write not only the message but also the NUL at the end -- and when that's done, it has written to the pipe, and then I immediately read from the pipe, and I just get the data back. Now, why are there two close calls? Because there are two file descriptors open: a write end and a read end. Oh, by the way, why does it say pipe_fd[0] there? Yes, that's a bug -- hold on, sorry, my mistake. So, sorry about
that. So now, as we're continuing, let's take a look at this: let's do pipes between processes. The question here about where the data is: it's buffered in kernel space, yes. Because we're using system calls like write and read, we're going from user space into the kernel to access that pipe, and so the buffering is entirely in the kernel. All right. Now, so far this has been only one process -- it creates the pipe and then it uses it -- so this code example is a little goofy, because the process writes into the pipe and then immediately reads from the same pipe. There are no two processes. Hold on just a sec -- we're getting to that example. And how do we get to that example? We execute pipe, which gives us two file descriptors -- there it is: the first file descriptor is the read end and the second one is the write end -- and then when we do fork, poof: now we've got a parent process and a child process that are sharing a pipe. Now, if you notice, what I did earlier -- which I said was a little goofy -- was I wrote to the write end and read from the read end in one process. I actually have the option here that both processes can use both ends: the parent can read and write the pipe, and the child can read and write the pipe. But that's a little goofy, right? So what we typically do is the following: we generate the pipe and then we fork -- which I already kind of showed you in this picture -- but now, depending on what we're doing, we close one file descriptor in one process and the other one in the other. For instance, here, if pid is not zero -- really this should be pid greater than zero, sorry about that -- we are the parent, and in that case we write to the write end, which is number one, and we close the read end, whereas in the child we read from the read end but close the write end.

Now, the question here of whether we can use the heap for the pipe: the answer is that the kernel's got the pipe, so you don't have any control over where it is. Maybe you're asking where this array with the two file descriptors in it lives -- certainly you could use the heap for that if you wanted, although it's probably not necessary, because you're probably going to create a pipe in some place and then use it right away; but you could certainly put the two file descriptors in the heap if you liked. And if you wanted two-way communication, you don't have to have two pipes, but then the communication would get interleaved -- so you could create two pipes, one for each direction, certainly. And we'll get to what happens with closing in a moment here. The answer to the question on the chat is: if you have a file-description table entry and there's anybody still pointing to it, then it stays open. And writing to the read end, or reading from the write end, is not guaranteed to do anything useful.

So here, in graphics, I wanted to show you: we're making a channel from parent to child. We've already done fork, as you can see here -- we did pipe and fork -- so what we're going to do is close descriptor 3 on the parent side, because we're not going to read at the parent side, and close 4 over here on the child side. And now that we're done, the parent has the ability to write into the pipe, and it gets read by the child, and so now we can send a stream of data from parent to child. But we could do the opposite: here we could close 4 on the parent side and close 3 on the child side, and now the child can send data to the parent process. And, as was asked earlier, could we make two pipes? Certainly -- we could make a pipe to go from parent to child and one from child to parent, and they would be separate from each other, because they'd have separate queues. How do you
get end-of-file in a pipe? So, you know, think about this for a moment: a pipe is just a queue in memory, so what does end-of-file mean? What it means is that there's going to be no more data coming. So what happens is that after the last write descriptor is closed, the pipe's effectively closed: a read will return EOF. And after the last read descriptor is closed, if a write tries to write, it'll get the so-called SIGPIPE signal -- we talked about signals last time -- and if the process ignores that SIGPIPE signal, then the write will fail with an EPIPE error. So you could either capture the SIGPIPE signal or you could get an error back from write; those are a couple of options. And so in this instance here, we close file descriptor 4, so now that pipe is hanging. We're not going to garbage-collect the pipe yet, because there's still a file descriptor pointing at it, but what you can see here is that the only thing process 2 is going to get out of reading that is EOF, end-of-file.

All right, now, once we have communication, we need a protocol. So a protocol is an agreement on how to communicate. In the case of that pipe, yes, we can send a stream of bytes from parent to child, but how does the child interpret that? Well, we may need to decide to put the bytes into packets -- there are some system calls, like send-message and receive-message, you could do that with -- or you could packetize it yourself and say, well, I'm going to send you a stream of bytes where the first one says how many bytes are in the data structure, and then I put that number of bytes. But that's starting to become a protocol, where there's an agreement for how the bytes are formatted in the channel. And we're not going to go into this much today, but just to get you thinking here: you've got a syntax to that protocol, which says how the bytes are structured together -- you know, this byte is always followed by those bytes, whatever -- and then semantics, of what that means. Oftentimes you can describe this with a state machine, so protocols can get pretty sophisticated. We're going to talk about the TCP/IP protocol later in the term. And another thing which we're not going to talk about today, but also later in the term, is the fact that across the network, for instance, you may need to even translate from one machine representation to another. So if you remember the big-endian/little-endian discussion from 61C, it could be that when you send a message from one process to another, that other process looks at integers a different way, and you need to reformat the messages to match what they are at the other end. Now, the question here about whether you can use higher-level constructs like fopen and fread on pipes: what you'd do in that instance is create the pipe, and then you can wrap a FILE* around it -- there are calls to do that, called fdopen, for instance. This is not quite, but maybe similar to, CRLF line endings being replaced by line-feed endings under some circumstances, perhaps. And yes, this is decoding and encoding, but it needs to be agreed upon via a standard, and so that whole idea of what the standard for encoding and decoding is gets pretty interesting -- but later; we'll talk about it. And by the way, another word you might be aware of is serialization -- you probably talked about that in some of your classes, like 61B. And yes, people are mentioning things like UTF-8 and UTF-16 and so on; that's also part of it here.

So, some examples. Here's a simple protocol: your telephone. You pick up the phone, you listen for a dial tone, or see that you have service -- not too many of you probably even know what a dial tone is anymore, but maybe you do. Then you dial a number, you hear ringing, and then all of a sudden on the other side you hear "hello," and then you say "hi, it's John" -- or my favorite is "hi, it's me"; it's like, well, how do I know who it is? -- but then you might say something like "how do you think blah
blah blah blah blah the other side said yeah blah blah blah maybe you wait a little bit to think then you say goodbye they say goodbye and you hang up and this is actually a protocol where the ringing uh the expectation is that somebody at the other side says hello it's always a little crazy when you get a spam call and they don't and then that hello leads to the initiator the call saying what it's about which then uh gets a response back okay which eventually causes a closing of the channel okay saying goodbye and then you hang up and these round trips here are very similar to what happens with tcpip with the fin messages and so on okay so um the protocol we're going to talk about for today's the rest of today's lecture is this web server request reply protocol and uh there's a communication channel of some sort that we need to figure out how to discuss in the middle here but the client might say request over the network say and then the web server would give you a reply and there's a very carefully uh constructed protocol here okay and this uh communication from the client to the web server is certainly going to be running tcp ip but the um there's more to it because you've got to satisfy http so there's actually uh some standard protocol with the headers and so on okay all right so this idea of cross network ipc is an interesting one because potentially you could have one server serving a whole num large number of clients and many clients accessing a common server starts yielding some interesting questions like how does the server keep track of the clients okay so how would the server keep tracking the clients anybody have any idea there okay so maybe every client has a different ip address well if you're anything like um uh you know like myself when you use a web browser firefox whatever your favorite chrome uh notice there may be a bunch of tabs uh or there may be uh a bunch of pieces inside and in that case there may be many clients that the server is 
And they may all be at the same IP address — so then what? I see a lot of "sockets" and "cookies" in the chat. IP address plus MAC address? No, that's not going to help you. Sequence numbers? Okay, I'm going to try to answer this — oh, I saw somebody say "port." That's exactly right. Each unique communication, which we're going to talk about here, has an IP address and a port on each side, plus a protocol, and as a result each communication channel is unique. The unique ID is going to be a five-tuple, which we'll get to in a moment.

First, let's make sure we understand what we mean by client/server. A client is somebody that asks for service from a remote server. Clients are only sometimes on — you turn your computers off, you turn your cell phones off sometimes — but the client is the thing that typically initiates contact, like "here's a GET over HTTP for index.html." A server, on the other hand, is typically always on, up at some well-known address that can be accessed by a client. It doesn't typically initiate contact with clients, but it needs a fixed, well-known address and port in order to be findable by them. And of course you make your request and you get some response back.

Now, what's a network connection? Let's be really basic. For this lecture, it's a bi-directional stream of bytes between two processes that might actually be on different machines. For now we're going to be discussing TCP, which is basically the control protocol used across the network and which does error recovery, so a connection is a unique, reliable stream of information. Abstractly, a connection between two endpoints A and B has a queue going in both directions: one for data sent from A to B, and one from B to A. That's just like what we were talking about with pipes, except that this is potentially across the network — it could be on the same machine, might be in the same building, could be on different continents. It could even be, I suppose, between here and the moon and back, if there's somebody up there.

So we need something to help us with this, and the socket abstraction is that something: the idea of an endpoint for communication. The key idea is that communication across the world, once again, is going to look like file I/O, with reads and writes. So here we go: one process does a write, which goes into a data structure we'll get to, called a socket, which causes the communication to go across the network into another queue, at which point process B can read from the other end of the socket — and we get communication. And because we're going to be using TCP/IP, we don't have to worry about errors in the middle.

The difference between a port and a socket: a port describes a unique communication; a socket is a data structure that includes a queue. Hopefully you'll see the difference in a moment — if you don't by the end of the lecture, make sure to ask again. Now, just as with pipes, if we go to read on one side and there's no data, that process gets put to sleep until the data shows up. So sockets are endpoints for communication; they're queues to hold results; and two sockets connected over the network give you inter-process communication over the network. This sounds great, but now you've got to start asking questions like: how do you open one, what's the namespace, how do you connect them?

There are lots of different types of sockets, it's true, but not all pipes are sockets. There are ways to get things like pipes that don't have sockets internally, and there are also ways of connecting sockets internally that act like pipes. For now, the native pipe implementation is actually not the socket implementation on a lot of Unix distributions.
So we need to figure out how to connect all of this. Some more details, then. The first is that sockets are pretty ubiquitous. What I said about POSIX not being ubiquitous everywhere is not true of sockets: they're implemented on almost any operating system that wants to communicate over the network — you pick it, it's got it. They were standardized by POSIX, but this is a part of the standard that is always there.

The thing you ought to know, which is fun, is that sockets came from Berkeley: the Berkeley Software Distribution, Unix version 4.2, was the one that first introduced them. Definitely "go bears" on this. That release had a whole bunch of benefits and a lot of excitement from potential users — people who were there at the time have told me stories about runners waiting to get the tapes with the latest release so they could quickly take them to where they'd be loaded onto computers and run. So Berkeley 4.2BSD had a lot of buzz. Go bears — you can be proud of that.

Now, the same abstraction works for any type of network. You can be local, within the same machine: as I mentioned before, you could imagine two sockets connected inside one machine using the socket libraries in the kernel, and it would look like a pipe — though, as I said earlier, not all pipe implementations use sockets, because a pipe is a simpler interface. For the Internet, you go across with TCP/IP and UDP/IP, and at the time of 4.2BSD those were not the only networking protocols: there were AppleTalk and IPX and a whole bunch of native ones, some of which still live in deep recesses of the network.

And yes — there are 162 participants in our class right now. That is pretty funny.

More details on sockets: a socket looks like a file, with a file descriptor. So once again, our standard idea — that all I/O looks like reads and writes to files — is going to be true with sockets. Write adds output; read removes input. Since this is I/O, there's no notion of lseek. That's an example of something you might think is part of the standard file interface that just doesn't make any sense for sockets — and it doesn't make sense for pipes either.

Now, how can we use sockets to support real applications? A byte stream by itself is not necessarily useful. A bi-directional byte stream has no boundaries between messages, and it doesn't necessarily have any interpretation. We already talked about this: you need to add syntax and semantics, and you possibly need a serialization mechanism. We'll talk at another time about RPC facilities and so on. (There was a question about Kafka — that's a different thing; we'll get to it toward the end of the term.) There's also no notion of append here, because there's no notion of seek: when you write, the data just goes at the end of the socket's stream, and sockets keep things in order just as TCP/IP keeps the stream in order. Or, the other way to say it: every write is an append.

So let's dive right in with a simple example. I'm going to call it a web server, but it isn't really doing HTTP, so that's a bit of a misnomer. Suppose the client sends a message and the server echoes it — that's it. It's an echo server. What might that look like? Here I have a picture of the network: the left side is, say, Berkeley and the right side is Beijing, and we've set up a socket between the two. The two green boxes on the left are part of the same socket.
They're just the two queues going in either direction, and the two green boxes on the right are part of the socket on the server side. So we have two boxes for the client and two for the server, because the connection is bi-directional when you set it up.

The first thing, as I already indicated, is that the server sets up its socket — we don't yet know how — and immediately does a read. Of course the socket is empty on the read side, so all that happens is the server enters the kernel and waits. We saw that earlier, when I showed you the web server example at the very beginning of the lecture: you did a read, and if there wasn't any data, you went to sleep.

Meanwhile, a client comes along and sets up an echo. One of the things we need to do is figure out what the user wants to send, so maybe we do an fgets from standard in — a streaming input that waits until you hit a carriage return. Then the client sends the data over the socket by writing it: a write system call on the socket file descriptor with our buffer. And notice that because I say strlen(sendbuf) + 1, I'm sending the null character at the end of the string in addition to the string itself — this is the kind of thing to start thinking about as you get comfortable with C. That write can return right away without the data actually having gone out, because, as you remember, writes are buffered in the kernel; the socket will try to send the data, but we return from the write almost immediately. At that point the client tries to read, to wait for the response, and of course it goes to sleep, because there's nothing on the read side of the client's socket — just as there's nothing on the read side for the server.

Meanwhile, back at the fort, the written data gets sent out across the network to the other side, where it wakes up the server's read. The server process wakes up, maybe prints the message on the local console, and then writes the echo back; that gets sent across the network, wakes up the client, maybe prints something on the screen — and then of course we can loop back and do it over and over again. And now we have an echo server. The fgets, just to be clear, is only asking the user to type in the string they want to send; it's the write that actually sends it across the network.

About the picture: the green boxes are the socket pieces inside the kernel, and the white boxes represent places in the code where you're interacting with the kernel. So mostly, on either side, the client and the server run at user level; the green boxes are in the kernel; and occasionally, when I do a write or a read, I enter one of the white boxes — and potentially I have to wait. You can try to force the kernel to send the data — there are ways to do that with a flush — but by and large it'll just send it right away, so you don't worry much about that.

And note: this is not four sockets, it's only two. A socket is a double-sided endpoint for communication; the two green boxes on the left are the client's socket and the two on the right are the server's. Each side has two queues — that's why there are four green boxes.

Now let's look at this in code a little. In the client code, we have to get ourselves buffers with some maximum input size; a socket is bi-directional because there are two queues inside of it, whereas a pipe has only one queue, so it's unidirectional. If you look here, we grab some character buffers, and MAXIN and MAXOUT are defined somewhere else in this file.
Then we go around a loop over and over: we grab sendbuf from the user — oh, I guess I temporarily broke this code, I apologize; forget the while condition and assume it's while(true) — we write it out, then we clear recvbuf, read the reply, and keep looping. The same on the server side: we read the data, write it to standard out, and echo it back. So our write goes across the network and wakes up a read, and then the write on the other side goes across the network and wakes up our read, and this repeats over and over. Yes, the picture looks like DNA — it's true.

Now, what assumptions are we making here? One is that we have no error-correction code for what happens if a packet is lost, because we're assuming that if you write, the data gets read on the other side. With a file, unless your disk is full, the assumption is that when you write you can read the data back; when you write to a TCP socket, the assumption is that the read on the other side happens. It's like pipes: if you put it in, it'll come out on the other side. (Let's hold off on the chatter in the chat for now.)

The other important assumption is an in-order sequential stream: when I put multiple writes into the input side of a socket, on the opposite side they come out in exactly the same order — not a different order — and every byte that goes in comes out exactly once. That's a property of the TCP/IP protocol, it's a really nice semantic, and it's why everybody loves TCP/IP. There are some disadvantages to TCP/IP, but this is a pretty big advantage.

So when the data is ready on the other side, what happens? The read gets whatever is there at the time. This is why, to do a real version of this, you need to come up with a protocol that gives you message boundaries. For example: the first thing I write is the number of bytes to expect, and then I write those bytes; on the other side, I read the number of bytes I'm expecting, and then I keep looping on read until I've received that many bytes. To really do this correctly, you need a protocol you've defined that gives you things like message boundaries. For now we're not worrying so much about this. We're also assuming that we block if nothing has arrived yet, just like pipes.

So TCP/IP plus sockets is very much like a bi-directional pipe that goes across the globe — a very simple pipe to two ends of the planet, which is pretty nice. Or a pipe on the same machine, or a pipe between different machines in the same building: they all act through exactly the same interface.

Now, socket creation is what we might be interested in next. File systems provide a collection of permanent objects in a structured namespace. If you think about it, the whole point of the file system is that I can name a file so that I can open it, with a pathname like /home/kubi/classes/cs162 or whatever. There is a namespace.
The problem with sockets is: what's the namespace? Files exist independently of processes, and it's very easy to name a file with open. But sockets are, by their very nature, transient, and really only functional when they're connected.

Pipes partially got us there: a pipe is one-way communication between processes on the same physical machine, with a single queue, created transiently by pipe() and passed from parent to child in a way that allows sharing between two processes. Notice that in that instance there isn't any namespace per se; rather, we called pipe(), and the fact that the file descriptors are then shared is how we end up with the connection between the two processes. The reason a pipe is unidirectional is that although the two processes each have pointers to the write end and the read end, if they both try to write, the data gets interleaved — and I don't consider that bi-directional, because you can't have two clean communications; you get one garbled, combined communication. That's why you always end up closing one end or the other, and if you really want bi-directional communication with pipes, as I said earlier in the lecture, you create two pipes.

Sockets have the problem that, first, we're not on the same kernel — that's a little bit of a problem — and we need to somehow address something all the way across the planet. How do we do that? A socket does have the two queues, for communication in each direction, but the processes can be on separate machines, so there's no common ancestor to pass anything from one to the other. We could be here in Berkeley and in Beijing, or pick your favorite other place; there's certainly no common ancestor of those processes. So how do we name things?

The namespace, of course, is IP, and you're all very familiar with it. For instance, a hostname like www.eecs.berkeley.edu is an example of a name that can be uniquely identified across the network and used to route traffic. Of course we'll have to talk about things like DNS later in the term, but that hostname translates directly to IP. And what is IP? IP addresses, depending on whether you have IPv4 or IPv6, are 32-bit or 128-bit integers, so www.eecs.berkeley.edu would translate into some IP address, which would then allow us to actually communicate across the network.

But as I mentioned earlier, the IP address is not enough: if you have a browser with a bunch of tabs in it, each of those tabs has the same IP address, because there's only one machine. You need a way to uniquely name a connection, and that's where ports come into play. Ports are part of the TCP/IP and UDP/IP specs. They're 16 bits, so there are only 65,536 of them, and the first 1024 are called "well-known." The well-known ports are much harder for you to bind anything to — in fact, you need to be superuser to use them. Then there are ports between 1024 and 49151, which are typically registered ports — for instance, 25565 happens to be the port for a Minecraft server, an important one for you all to remember — and then there's a bunch of dynamic or private ports, and you'll see in a moment what they're about.

So for a connection set up over TCP/IP, we're going to need something special: the server needs to set up the process of waiting for a client to connect, and that's called a server socket. The server produces a server socket, and that server socket listens, typically on a well-known port that has been registered with a standardization agency. You could register one, but it's very hard to get ports in that lower 1024 range registered; typically people have ports that are just well-known in the higher ranges.
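As a sketch of the name-to-address step, here's my own small example using getaddrinfo. It uses the numeric address 127.0.0.1 so it runs without DNS; a real hostname like www.eecs.berkeley.edu would trigger an actual lookup:

```c
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void) {
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;        /* IPv4 */
    hints.ai_socktype = SOCK_STREAM;  /* TCP */
    hints.ai_flags = AI_NUMERICHOST;  /* skip DNS for this demo */

    /* Note that the port is passed as a string. */
    if (getaddrinfo("127.0.0.1", "80", &hints, &res) != 0) return 1;

    struct sockaddr_in *sin = (struct sockaddr_in *)res->ai_addr;
    char ip[INET_ADDRSTRLEN];
    inet_ntop(AF_INET, &sin->sin_addr, ip, sizeof ip);
    printf("resolved to %s port %d\n", ip, ntohs(sin->sin_port));
    /* resolved to 127.0.0.1 port 80 */

    freeaddrinfo(res);
    return 0;
}
```

The resulting addrinfo is exactly what you'd hand to connect() on the client side, or to bind() on the server side.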
Once the server socket is set up, the client will be able to communicate. After creating the server socket, the server calls listen, which says: go to sleep waiting for an incoming connection. (Listening — see, that's an ear, by the way.) The client then creates its end of the socket and sends a request to the other end using the IP address and the standard port, and at that point the server executes an accept, which says: take this connection and make it real enough that we can communicate. So the server says "I accept," the kernel takes the connection and creates another endpoint — notice both ends are green — there's a final connection phase, and when you're done, those two ends are the two ends of a bi-directional socket. And this is TCP/IP. (Somebody asked about ping: ping is different — it does not set up a connection. Ping uses the ICMP protocol, which is just a datagram protocol.)

We've mentioned ports a couple of times, but ports are really what makes this connection unique, so let me talk about them again. Both sides of the socket — just look at the green ones — have associated with them a five-tuple: the source IP address, the destination IP address, the source port, the destination port, and the protocol, such as TCP. Together, those five things mean that this yellow connection is unique, distinct from all other connections we might make between those two IP addresses.

How does this work? I already mentioned that the client side of a connection typically uses that upper range, above 49000 or so, of dynamically assigned port numbers: when a client first makes a connection, it assigns itself a random port. So now the client has its own IP address and a random new port for this connection; the server side has its own IP address — which is what I'm remotely connecting to — and its well-known port. Here's a good one: 80 is a port you all ought to know; it's the typical web server port. So the connection goes from the client up to the server socket and says, "hi, I'd like to make a connection on port 80," and the server says, "okay, I will make that connection for you," and when you're done you have two sides — the two green sides of a connected pair of sockets, each green thing a socket on its own side. And why is this yellow connection different from any other yellow one? Because at least one of those five things on the left is unique.

So, what is a port, again? A port is a 16-bit integer that helps define a unique connection. Each server socket is bound to a particular port — I'll show you this in a moment, but if this were a web server, the server socket would be bound to port 80 — and the incoming connection asks for that IP address at port 80; that's the connection being requested. So all the yellow connections for this server socket have the same destination IP address and destination port number, but they have different IP addresses or ports on the client side. (Again, this is not ping; ping is something completely different — it's kind of like our echo server, but as a datagram protocol. This is TCP/IP.)

The client tells the server what its IP address and port are; the server knows its own IP address and port; and when you're done, you have a unique connection. If the same client wants to make another connection, it needs to come up with a unique port for its side, because otherwise there wouldn't be a unique yellow connection. So in that example of the web browser we talked about, with all the tabs, every tab would have a different local port associated with it, even though we're talking to the same remote server, at the same IP address, all on port 80.
I'm going to move on from this, but just keep in mind that every yellow connection has this unique five-tuple. And 80 is a common port — that's web browsing without any security; 443 is the HTTPS protocol; 25 is sendmail; etc. The question about localhost:500 — does that mean port 500 on the local machine? Yes, that's correct. In fact, for people who run local servers for IoT devices, for instance, you'll often see ip-address:8080 or :8000; that's pretty common.

Now, are all the server sockets operating out of the same port? No — there's one server socket operating on port 80, and it spawns all the new sockets that are communicating on port 80. If you have port 443, or some other port like 25, that will be a different server socket, listening on port 25.
So there's one server socket for all the port-80 connections, one server socket for all of the port-443 connections, et cetera.

In concept, what happens is this. The server creates the server socket — that's the blue one — and part of that creation is binding it to an address: its current host IP address and the port, like 80, that it's going to provide service on. Then it executes listen, which means we are now listening for incoming connections, and it tries to execute accept, which puts us to sleep until somebody actually connects. Later, the client creates a socket and does a connect operation, which says: I want to connect to a remote server with this host IP address and port. Assuming we did this correctly, the request goes across to the server, which is busy listening; the system call accepts it; the three-way handshake happens; and when we're done, we have a connection socket on either side, with a unique five-tuple defining this as a unique connection. Every subsequent client that tries this will get a different unique five-tuple — a unique connection.

Once the server is ready, it says: I have a socket, and I'm going to do a read request on it. Of course, that goes to sleep right away, until the write request from the client comes in and says "I want to look at some HTTP address." Meanwhile there'll be a read on the other side; the server wakes up and writes a response, which gets sent back — and we do this combined write-the-request, wait-for-the-response, read-the-request, write-the-response on either side. (Each connection socket the server owns corresponds to a different client port — that is correct.) When we're done, we close the client socket, and then the server goes back and does another accept. And that's how we serve multiple requests, for now.
Now — by the way, there's no race condition with the incoming connection requests: they go into the server and are put into a queue using synchronization that we haven't talked about yet. No race conditions.

The client protocol you see here is pretty simple. First we get an addrinfo structure defining the host we're trying to connect to, with a host name and port name — this is "look up the host," basically; I'm not going to show you the details until the end of the lecture, if we get there, but it returns the host/port combination for whoever I'm trying to communicate with. Then I create the socket file descriptor — so the client now has this file descriptor, which is an integer, while the socket structure itself lives inside the kernel. Then I do a connect, which waits until the connection finishes; when it returns, the socket file descriptor is no longer a disconnected socket, it's a connected one, and we can go ahead and do our client operations, whatever they might be — which could be lots of reads and requests, over and over — and then close.

The server side is a similar idea, but it's got this server socket. If you look here, we set up which address family we want, and we bind, which basically says we want to listen on a certain address and port — binding attaches an address to a socket; creating the socket just makes the queue. Then we do listen instead of connect, and now, in a while loop, over and over again: we accept the next connection, we process it, we close it, and we go accept another one. And we're good to go.

Can anyone see what's wrong with this protocol? What seems unfortunate about this particular server implementation? Yeah: one connection at a time.
Right — so this can't be good. What can we do? Well, first of all, how might we protect ourselves? Because notice that right now we're running everything in the same process, over and over. What we can do is take what I just showed you and add a fork: let the child communicate with the client, do a wait until the child is done, close the connection, and go back. Notice that when we fork, the listen socket ends up on both sides. The child doesn't need the listen socket, because it's not a server, so it closes the listen socket; the parent, on the other hand, doesn't need the connection socket, because it's not serving the client, so it closes the connection socket. This is just like the pipe example I gave you earlier, where we create a pipe, fork, and then each side closes one of the two file descriptors.

We're not serving multiple clients yet — we're just putting protection in, so that every child runs in a protected environment. We haven't gotten to "multiple" yet, but you can see it's coming. The only thing I did that's a little different here is that once I accept the incoming connection, I fork, and if I'm the child — pid equal to zero — I close the server socket, serve the client, then close the connection socket and exit. Meanwhile the parent closes the connection socket, because it doesn't communicate with the clients, and waits. That's all we changed.

But of course we want concurrency — or parallelism, if that's available; at least concurrency. Because if we could have multiple client requests going on simultaneously, then when one of them is sleeping on a disk access, another could be being served. So even if we don't get parallelism, we still want more than one request going on at once — and so far we've kind of broken that.
request going on at once, and we haven't managed that yet. And why do we need protection? Well, in that other process we can limit access to only a small part of the file system, or whatever, to make sure we're safe. I know we're running low on time; hold on, we're almost done. The question of why we're closing sockets here: when we fork, all of the file descriptors are duplicated, and the child doesn't need the server socket being listened on, while the parent doesn't need the connection socket, so each side closes the one it doesn't use. So here's a simple example where we're not waiting: after we fork, the parent closes the connection socket and goes back and accepts another one immediately. And yes, the child can do its own setup: in the child code you make sure to do all that closing and set up your environment before you start processing. But notice that here we close the connection socket and immediately go accept another one. All I did was remove that wait (you can see I commented out the wait), and suddenly we have concurrency: multiple requests at once. Now, there's a comment in the chat saying this seems heavyweight. It is, because we're creating a brand new process every time. And let's be careful about one thing, since I see some chatter in the chat: the server socket is the same the whole time, but we don't use it for communication, we use it for listening. The parent has the server socket, and it's the one doing accept, over and over, and each time accept returns it comes back with a new connection socket. Each child gets a different connection socket, and the parent keeps accepting new connections. Every
one of these connections is unique, because it has a different five-tuple: at least a different remote IP address and port combination (it could be just the port, but something there is unique), and it gets its own process. So every accept gets a new process for a new child connection. Now, just before we finish up: the server address. One of the ways we set up the address on the server side is to say which port we're interested in, and you may ask: if the port is a 16-bit integer, why is this a char*? Well, many of these interfaces (you can do man on them) take a char*, a string representation of the number. Anyway, what we do here is set up things like which address family we're communicating with on this socket. The way to think about families is that when sockets first came out, IP wasn't the only thing out there; there were many other options. What we're basically saying is that we want a stream socket, which means TCP/IP; we won't pin down the family, because we'll take whatever comes in; and we'll bind to a particular server address and port. There's a flip side on the client, which is probably more interesting; you should look at it after the lecture. If what comes in is a particular host name and port we're interested in, we can look it up using something called getaddrinfo, which returns a structure, an addrinfo, with all the information about which IP address and port we want, so that we can then connect (or bind, for the server socket). Finally, if we're willing to give up per-connection protection in exchange for something lightweight, we can use threads. Here's an example where, instead of fork, all we do is create a thread: the spawned thread handles the request, and the main thread just goes back and accepts again. So now that's a
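The getaddrinfo lookup mentioned above can be shown in a small self-contained sketch. Note this uses a numeric address (AI_NUMERICHOST) so it runs without DNS; a real client would pass an actual host name and then loop over the results calling socket/connect. The port really is passed as a char*, as the lecture points out.

```c
#include <assert.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

int main(void) {
    /* Ask for a TCP stream socket; AF_UNSPEC = take whatever family fits. */
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags    = AI_NUMERICHOST;   /* numeric host: no DNS needed here */

    /* getaddrinfo("host", "port", ...): the port is a string, as discussed. */
    int err = getaddrinfo("127.0.0.1", "80", &hints, &res);
    assert(err == 0);

    /* Turn the first result back into numeric host/port text. */
    char host[NI_MAXHOST], serv[NI_MAXSERV];
    err = getnameinfo(res->ai_addr, res->ai_addrlen,
                      host, sizeof host, serv, sizeof serv,
                      NI_NUMERICHOST | NI_NUMERICSERV);
    assert(err == 0);
    printf("%s:%s\n", host, serv);

    freeaddrinfo(res);
    return 0;
}
```

The `res` list can hold several addresses (IPv4 and IPv6, say); robust clients try each in turn until connect succeeds.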
thread per connection. That sounds great, unless you get slashdotted: you could easily have a situation where so much incoming traffic spawns so many threads that you crash your kernel. That's bad. So what should you do? How do you prevent it? (Someone says you can't fork; true, but we're not forking here anyway, we're doing threads.) Limit the number of threads: great. And the way we do that, which I'll only start on briefly today, is to create something called a thread pool. The basic idea is that we create a bunch of threads at the beginning, a fixed number, and every time an incoming request arrives we put the connection on an incoming queue; when a thread becomes free, it goes back, dequeues the next connection, and handles it. So a thread pool is a way of bounding the number of threads. All right, we're done for today. In conclusion, we've been talking about inter-process communication: how to get communication between different environments, namely different processes. Pipes are an abstraction of a single queue; you can create one in a parent, pass it off to children, and decide which direction you want it to go. Sockets are an abstraction of two queues, potentially across the network; you have two ends, a read end and a write end on both sides, so you get two streams that are not interleaved with each other. From the socket you get back a single file descriptor that you can both read and write, and this is different from a pipe: one file descriptor handles both reads and writes, and the direction things go depends on whether you're reading or writing. And you can inherit file descriptors through fork, which is why, for instance, in the example where we forked, we ended up with all
of the sockets on the child side and the parent side, which meant the child and the parent each had to close off the sockets they weren't using. All right, I think we're good for now. I'm going to call it a night. Thanks for hanging with me, everybody, and we'll see you on Wednesday. Have a good night.
CS162 Lecture 6: Synchronization 1 (Concurrency and Mutual Exclusion)

Okay everybody, welcome back to 162. Today we're going to dive into some actual implementation details and start talking about how threads are implemented in the kernel, and some things you need to worry about with synchronization. If you remember, last time we were talking about high-level APIs; today we're talking about synchronization. In particular, we'll start by understanding how the operating system gives you concurrency through threads, with a brief discussion of process and thread states and scheduling, and a high-level discussion of how stacks contribute to concurrency. On Monday we'll talk more about what Pintos does to give you threads and dive even deeper, but today we start diving into the kernel; then we'll talk about why we need synchronization, and then explore locks and semaphores in a little more detail. If you recall, last time we talked about inter-process communication, or IPC, a mechanism to create a communication channel between distinct processes. The reason we wanted that: we started with all this work to make sure processes were isolated from each other, but then we needed to figure out how to selectively punch holes in that protection so those processes can communicate when they want to. And we're going to need to start thinking about protocols, maybe a serialization format, especially if you go across the network. One good thing about having separate processes, rather than combining everything into one, is failure isolation; that can get interesting, and we'll talk more about it later in the term. And there are many uses and interaction patterns here: once you have processes, possibly spread across the network, and you combine them together, you can do all
sorts of interesting things, and toward the end of the term we'll even talk about peer-to-peer style communication and cloud communication as well. The other thing we talked about was types of IPC. For instance, we talked about Unix pipes, and the idea there is very simple: a Unix pipe is a queue data structure inside the kernel; one process can write to the input end of the pipe and the other process can read from it. Notice that we're using the read and write system calls, the low-level raw interfaces, just as we would for a file, except that this is an in-memory queue of limited size, and as a result it's more efficient when we're not trying to make things persistent. The memory buffer is finite, which means that if the producer tries to write when the buffer is full, it blocks, meaning it goes to sleep; and if the consumer tries to read when the buffer is empty, it blocks too. Today we'll start understanding what it means to be put to sleep and how that actually works. We also talked briefly about the pipe system call, which takes a two-entry array of file descriptors and fills it with the read and write ends of the pipe, and about how to use fork to set up communication between two processes; you should take a look at the last part of last lecture. The other thing we talked about was sockets. The key idea was that while pipes are communication on the same machine, we could have communication across the world that also looks like file I/O. So we had this notion of a socket, a bidirectional communication channel; pipes, of course, were single-direction, kind of like half duplex if you will. A socket is an endpoint for bidirectional communication, so you have a socket on either side, and there's a process for connecting, which we talked about.
And notice that the queues, the green things here inside the sockets, are not pipes; there was some discussion of that on Piazza. Queues are just places to temporarily hold information. When you take two sockets and connect them together, you have a communication channel with two directions that are independent of each other, from process to process, and that could be on the same machine, in the local area network, or spanning the globe. As part of that discussion, we briefly covered how sockets get set up for TCP/IP. The two green sockets here are the final communication endpoints, but we talked about how you set up a server socket bound on a certain port; the first socket on the client side requests a connection; the server socket then produces a new socket just for that connection; and now the yellow channel between the two sockets is a unique channel, defined uniquely by the five numbers you see at the left: the source IP address, the destination IP address, the source port number, the destination port number, and the protocol, which is TCP in this particular instance. The client side of the connection often has a random port, and that's why, in fact, you can have multiple tabs in a browser that all connect to the same website and act independently. On the server side you have well-known ports, like 80 for the web, 443 for secure web, 25 for mail, et cetera, and the well-known ports all fall in the range 0 to 1023.
Then, last but not least, we went through several versions of a web-server-like protocol. This was the very first one we looked at. We talked about how you generate the socket (that's the first little red thing in this code), and that creation specifies an address family, protocols, and so on. Then you bind the address to that socket, and that's where you say what port you're interested in serving and what the local address is. Then you listen, and that listen is exactly what you see here with the ear: listening for incoming connections. Then, in a loop, you accept the next connection, and what comes out of accept is a brand new file descriptor, the server's side of the connection; that's the green socket that comes in from accept, and you can do anything you want with it. As you might remember, this particular instance had no parallelism, so it takes connections one at a time; if you want to see our discussion of several variants of that, take a look at the lecture from last time. All right, that's the quick summary. Any questions before I move on to new material? Is everybody good? I see our numbers are a little lower today; hopefully others are just delayed, and you guys are the most gung-ho of the students. Homework one is due; that might have something to do with it. Okay, so today let's talk about implementation. Multiplexing processes: we have a process control block, which we've been discussing indirectly throughout the first couple of lectures, and it's basically a chunk of memory in the kernel that describes the process. It has things like its state, its process ID, its current registers and so on, if
it has only one thread; a list of open files, since we've been talking a lot about file descriptors; and so on. These are the descriptors inside the kernel describing a process. The scheduler maintains a data structure containing all of these process control blocks, and it decides, for each process and each thread within each process, who gets the CPU. We'll have a whole lecture and more on different schedulers; the question of who gets the next little slice of CPU is an extremely interesting policy decision, but that's for another day. The scheduler also potentially gives out non-CPU resources like memory, I/O, et cetera. The program counter, of course, points at where in the code a particular thread is currently running. So what does it mean to switch from one process to another? Here, process zero is running; and by the way, we're now talking about a single-threaded process, so that process's main thread is running. At some point an interrupt happens, and that saves all of the state, such as the registers, the program counter, the stack pointer, and all that sort of stuff, into the process control block for process zero; then it loads everything from the process control block for process one, and then it returns to user level. What's in the middle here is kernel level, what's on the outside is user level, and what's in blue is actual code execution. So what's in the middle all runs at kernel level, at high privilege: this is privilege level zero for system, privilege level three for user. As you may be aware, x86 actually has four privilege levels, but you typically only use zero and three. The other interesting thing I wanted to show you here: if we do this switching too rapidly, then what we get is all overhead and no
execution, and the part of the blue where we're actually executing user instructions becomes a vanishing fraction of the total execution. That's a form of thrashing: go back and forth too fast and you make no actual forward progress. The question about what the other two privilege levels are: they're called rings, and in certain military specs you have things that sit somewhere between system level and user level and use those other two. Sometimes they're utilized by the hypervisor; in some early designs, level zero is actually the hypervisor, one is the kernel level, and so on. For now, though, just imagine there are two, because we've only been talking about kernel and user. Now, the question about more time spent waiting than executing: this is clearly not to scale. It would be a very bad design if a vanishingly small fraction of the time went to executing real work; what we want is something under 10% overhead, or better, so that we're using most of our cycles for something useful. Even though we all know the operating system is the most interesting part of this, it would probably be good to actually execute some of your real programs. The other thing I'll point out is that there are a bunch of transitions: notice we go from executing process zero, transition into the kernel (that little yellow dot), exit from the kernel back to a process, and so on. These transitions, which are transitions in privilege level, represent potentially expensive saving and restoring of registers. In this case the entry into the kernel comes from an interrupt, or it could be a yield system call; we'll talk about both as we go on. Now, a process goes through a bunch of stages, as does a thread. For now I'm not even going to say which of these
this represents, because it represents all of them; processes and threads both have their thread component. But let's just talk about processes for a moment. The process starts in a new state, right after we execute fork and set the process up; then we put it on a scheduling queue, which is the point at which we admit it to, say, the ready queue, and at that point it's ready. What that means is not that this process's thread is actually running, but that it's ready to run. If you think about it, if you only have one CPU, or one core, then only one thing can actually be running at a time; everything else is ready. At some point the scheduler pulls it off the ready queue and it becomes running; again, with one core there can only be one thing running at a time. Later an interrupt might happen, which brings that original thread back onto the ready queue; some other thread will have been brought into the running state as a result, but we're only tracking one process or thread at this time. This goes on for a while, back and forth (isn't this animation great?), and at some point the thread or process will try to do some I/O, or do something else that requires a wait, like a disk access. How many instructions does it take to do a typical disk access, order of magnitude? Everybody's supposed to remember. A million, yep. A million cycles, at least, means that while we're in the waiting state, sitting on a queue waiting to get serviced with our I/O, there had better be something else running. So part of what we're doing is figuring out how to put something that's executing to sleep long enough to run other things in its place, and overlap the I/O with the computation. The question
here about SSDs: SSD latencies are smaller, so it's not a million, but it's probably 10,000 or 100,000; still big enough that you'll want to be put on a wait queue. If we have more than one core (good question), there can be more than one thing running, and the scheduler now has multiple run queues as well as multiple ready queues to worry about. But you'll never have a single thread run on more than one processor at a time, because a thread only has one stack, and if you tried to run it on multiple processors at once you'd get chaos. Ultimately the I/O completes, we get back to the ready state, and we continue running. Finally, we execute exit, if you remember, and that puts us in a terminated state, at which point the process is no longer available to run under any circumstances; it's terminated. Can anybody think of why we might not just free the process right away, why we might keep it laying around in a terminated state? Anybody? Yeah, great: because the parent needs to get the result. When it's in this state, terminated but not yet deallocated, that's typically called a zombie state; it's a zombie process. All right. Now, if you look inside the kernel queues, we have the ready queue and the CPU as the run queue, but there are many other queues. Typically, process control blocks work their way from the ready queue to the CPU, and potentially back again. If the time slice expires, meaning the amount of time it's supposed to run expires, it gets put back on the ready queue; if it makes an I/O request, it gets put on an I/O queue until the I/O is done; et cetera. Scheduling is the act of deciding which thing off the ready queue gets the CPU next. There's also a type of scheduling in something like a disk device driver, which we'll talk about later in the term, which
will decide which request gets to go next; that's usually there to optimize things like the disk head not moving as much. So there are lots of different types of scheduling; for now, the kind we're talking about is: when you have a bunch of things on the ready queue, which one gets the CPU next. The ready queue and all the I/O device queues really hold non-running processes. And there's a good question here: what happens on a fork operation? Well, you only have one CPU, so the child process needs to be put on a queue somewhere, which gets it onto the ready queue, because you can only have one thing running at a time. The diagram is a little confusing, but really what you want to think is that after fork, potentially the child gets the CPU and the parent goes on the ready queue, or vice versa, depending on the policy; and you should never assume one or the other runs first, because they're completely independent once fork executes. I understand the diagram is a bit confusing on that front, but you can only have one thing running, so one of them is on the CPU and the other is on the ready queue. You can imagine a bunch of these queues, and they all represent temporarily suspended threads; those queues hold PCBs, and they're linked lists of things. Just because I'm an electrical engineer, I'm using a little ground symbol for null here; you can cut me a little slack on that. We have lots of different queues in the system, all holding different suspended processes, and the scheduler potentially only interacts with the ready queue; the rest of the queues are interacted with through the
device driver, and we'll get into that in a couple of lectures. The device driver for the disk, for instance, when a request comes back, will potentially remove a process control block from its wait queue and put it back on the ready queue, so it's runnable again. And the scheduler is this simple loop. There are many options here; we have at least a lecture and a half on scheduling, because, surprisingly enough, that simple question of who on the ready queue gets to go next vastly impacts whether the system is responsive (if you've got a person typing), efficient (if you've got a long-running task), fair (if you've got multiple things running at the same time), or timely (if it's a real-time scheduler, say in a car, where between pushing the brake and the brake engaging there's a question of timeliness). These are all scheduling policy questions, which will be quite interesting for us, but that's for another day. The loop I wanted to show you, which I've shown you before, is the mechanism where, if there are any ready processes, it picks one according to a policy. Again, each device has a queue of processes waiting on it, and when a request comes back from the disk, it finds the process that's been waiting there and reactivates it. In something like Linux, where the threads are kernel threads (you'll see a bit of that distinction in a bit), the queues actually hold thread control blocks, not necessarily processes; that granularity of threads being put to sleep on these queues is what happens in systems with a one-to-one mapping between user thread and kernel thread. All right, many different scheduling policies. Let's dive in a little further. When we were talking about
processes originally, we mentioned that there can be many threads inside a process, and each of these threads has a stack and some registers. For now we're going to talk about threads and how they're implemented; when it matters whether I'm talking about a process or not, I'll bring it back. But just to remind you: what's a process? A process is a protected environment, the memory space and file descriptors and all that stuff we've been talking about, plus one or more threads, and each of those threads has a thread control block with registers and a stack. So when we need to talk about switching the protection environment from process one to process two, I'll make sure you know that's what I'm talking about; for now we're going to dive into just the concurrency portion, the threads. Threads encapsulate concurrency: they're the active component, while the address space et cetera is the passive part, inside the shell of the process. Why have multiple threads per address space? For sharing. If you remember, this is the shared state: within a single process, all of the threads share the heap, they share global variables, they share code; and, as we'll mention, some of the important global variables those threads share are locks, et cetera; we'll get to that in the second half of the lecture. Then each thread has a thread control block with information about where its stack is, what its registers are, metadata of various sorts, and a stack in memory; that's per thread, so every one of the threads has that information. And if you have too many threads, you can run out of space in your process. There was a question here: what about virtual address translation? I wanted you to get a general idea, but that's going to take a few lectures
to really get into, which is why I'm trying not to muddy the waters too much; we're working on the concurrency part today. Don't worry, we'll get there; you're going to be really deep operating system designers by the end of this class. The core of concurrency, as we've mentioned, is this scheduler loop, or I'll call it the dispatch loop here. Conceptually, the operating system itself is an infinite loop: run a thread, choose the next thread, save the current thread's state, load the new thread's state, and keep looping forever. Under a certain point of view, this is all the operating system does: it keeps looping, letting threads run until they either yield the processor or are interrupted, and then it picks another one and goes. Pretty simplistic; so now we're done, we'll have our final next week, right? There's the whole operating system. But perhaps we'll do a few more details, just because we can. One question might be: should we ever exit this loop? What are some good reasons to exit this loop, anybody? Well, interrupts don't necessarily exit it; an interrupt might be kind of a bubble, but it doesn't break the loop, because the interrupt happens and then control comes back. Yes: shutting down the machine; PG&E; power outages. Hopefully we're not going to get too many of those this season, though I'm thinking we might have power shutdowns. So basically, when the machine shuts down, or panics, or otherwise crashes, you exit the loop; by and large we just keep it going. Okay, so we're going to briefly cover administrivia, and then we'll look more at how this all works. Homework one is due today, as many of you are aware; I appreciate very much that you're here for class. It's great to
actually have people to ask questions. Project one is in full swing, and I saw an interesting query on Piazza that was kind of like: how can I write my design document if it wants code but I don't know how to do the project yet? It seems like a catch-22. The answer is that what we're looking for in your design document is evidence that you've read through enough of the code to have an idea of roughly what you'll need to do. You won't have it all done; that's what the code deadlines are for. But try to give us some intuition: it could be pseudocode, you could pick out a couple of function calls you know will be important, you could say, well, here's a data structure, we're going to add these fields to it. Those are not the same as "we wrote a bunch of code and it works." What we're looking for is a high-level idea of what you plan to do and why, supplemented with some code, or pseudocode if you like, that tells us some details of where you're going and helps your TA understand what you're thinking. So that's the paradox: you don't need fully working code to write a design document; that would be pretty strange. The ifdef USERPROG basically says whether user programs are supported or not; you can have a kernel-only version. You should be attending your permanent discussion section; remember to turn your camera on in Zoom, and discussion sections are mandatory, so we're taking attendance. Will the design document be graded? Yes. You're trying to give us an understanding of your thinking, and we'll grade the ideas that are there; then, in your design review session, your TA may give you a few suggestions of other things to
think about. So the design may well evolve over the course of the project; that's certainly accepted. The problem with example design docs, of course, is that they sort of have answers in them; I'll see if I can find one for you. But think of it as giving a high-level viewpoint to your manager, who is your TA, showing that you've thought through enough of what you need to do that you're on a good path. The other thing, of course, is that midterm one is coming up: two weeks from tomorrow, not a week from now. It's going to be video proctored; I understand there's some concern about how 61C's video proctoring went. Believe me, we're well aware of everything that's been going on in the department, and we'll try to avoid the mistakes of the past, or at least learn from them. I'm not entirely sure what happened; I think they were requiring people to record things locally, and there were issues with that under some circumstances. That's not our current plan; we'll get the details out to you. Any questions on administrivia? All right, let's talk about running a thread. What do we do to run a thread? You load its state into the actual CPU: registers, program counter, stack pointer. If you're changing process, you need to load its environment, which means getting the page table set up (that's the mysterious virtual memory we haven't talked about much yet), getting anything else loaded up, and then you just jump to the PC and start running. One thing that's going to be interesting for you here is that both the OS, which is managing threads, and the threads themselves run on the same CPU: when the OS is running, the thread isn't, and when the thread is running, the OS isn't, and we need to make sure we can transition properly
between those. This idea that the OS loads up a bunch of state and then jumps to the PC means, essentially, that the OS gives up control of the CPU, and we're going to have to deal with that. If you give up control to a user program that then proceeds to go into an infinite loop, clearly we're going to need to get the CPU back somehow. So that's a question: how do you get it back? Now, I've been playing with computers long enough that I got to play with some of the early versions of Microsoft Windows, like 3.1, some of the early Macintoshes, and other PC environments. In those PC environments, the multiple things that were running were fully cooperative. Suppose you had three applications running on your screen, in three windows, and one of the applications crashed. What would happen is that the system would freeze: nothing would move, and you would have no control of the windows of the other applications either. The reason is that the one application which crashed, and maybe went into an infinite loop, kept control of the processor. Fortunately, modern operating systems are not like that, because we have memory protection, which is an important thing, but we also have preemption possibilities through interrupts, which is going to be important to talk about here. But even back in the day you could have the illusion that multiple things were working: you could have many windows all drawing stuff simultaneously, representing different applications. The way that worked is that each of those threads would run for a while and then voluntarily give up the CPU by calling a yield function back into the kernel. Assuming that all of the applications cooperated, this worked fine; it was when they didn't cooperate, or when they had a bug, that there was a problem. The Mac was also this way, back in the dark ages,
in the early times, in the original Macintoshes. So let's talk about internal events first. Internal events are times when everybody is cooperating and voluntarily giving up the CPU. A good example of this is blocking on I/O: when you make a system call and ask the operating system to do a read, you're giving up the CPU, and therefore you're implicitly yielding it; while the operating system is working on your task, say talking to a disk for a million instructions, it can schedule somebody else. So, perhaps surprisingly, blocking on I/O is a great example of yielding. Saying that you want to wait for a different thread or a different process, say with a signal operation, is another example of voluntarily giving up the CPU, because you're saying: well, I have to wait, so go ahead and run that other thing for a while; it doesn't do me any good to sit in a loop waiting. The third thing, which is sort of an example to follow for all of these, is what I'll call a yield operation, and it's actually a system-call type of thing. Suppose I wanted to run an application that computed pi to the last digit. What I would do is have a while loop that's never going to exit, because pi is very long: I compute the next digit, then I yield, and then I go around over and over again. (I also see in the chat: mining Bitcoin, potentially.) These are very long-running computations where what I've done is decided to execute a yield system call regularly enough that multiplexing works, and the system acts properly, as if it had multiple threads running at the same time. Now, of course, this particular application I'm showing on the screen is flawed for a pretty important reason. Does anybody know why this is not a great example of yielding regularly? Well, maybe it yields too often initially. Does anybody know anything
about computing pi? The point here is that each digit you compute takes longer and longer, so while this particular loop seems to be yielding properly at the beginning, the yield operations are going to come at longer and longer intervals, and eventually it will effectively be as if this thing just runs forever. So this particular use of yield is probably not a great example, but assuming we yield regularly, then we properly multiplex things and we actually get multiprocessing. Now, if you remember, I gave you the POSIX API for threads in an earlier lecture: pthread_create, pthread_exit, pthread_join. There's actually a pthread_yield, although if you do a man on it you'll see that it's considered not supported on all operating systems; there's also sched_yield, which is a similar thing. What this does is say: I yield the CPU so that another thread can run. So this is a real interface. What we're going to do is take a look at what yield buys us here, and once we've got yield figured out, we'll graduate to a few other interesting ways to get threads to give up the processor. So let's look at this compute-pi function that I showed you earlier. We enter computePI, it computes a digit, comes back, and then executes yield. Now, here's the stack; remember how stacks grow down and come back up. (Yes, sleep would be a type of yield as well.) The computePI stack frame starts at the top, and then we execute yield. Let me show you back here: notice what's happening in this while(TRUE). We enter the computePI function, so that's the first stack frame; then we run computeNextDigit and come back; and then we run yield. So yield is going to have a stack frame that's just below computePI, the way we've set this up; that's what we're showing here.
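To make the yield loop concrete, here is a minimal C sketch of the "long computation that yields regularly" idea. The names are stand-ins, not the lecture's slide code: compute_next_digit is fake work (the real pi algorithm isn't shown here), and sched_yield is used because it is the portable POSIX call, whereas pthread_yield is not supported everywhere.

```c
// Sketch of a long-running computation that voluntarily yields the CPU
// after each unit of work, so cooperative multiplexing can happen.
#include <sched.h>

// Placeholder "work": pretend each call produces one digit of pi.
static int compute_next_digit(int i) {
    return (i * 7) % 10;
}

// Compute n "digits", yielding after each one so other threads/processes
// can run; returns how many digits were produced.
int compute_with_yield(int n) {
    int produced = 0;
    for (int i = 0; i < n; i++) {
        (void)compute_next_digit(i);
        produced++;
        sched_yield();   // voluntarily give up the CPU
    }
    return produced;
}
```

Note that this sketch yields at a fixed rate per digit; as the lecture points out, a real pi computation takes longer and longer per digit, so the interval between yields would grow.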
In this figure, blue is going to be the user code. We have the computePI stack frame, then yield, and yield is going to execute a system call, which means we transition into the kernel, and at that point we actually change stacks: while we have the user stack in the blue area, we end up on a kernel stack in the red area, and there's a one-to-one correspondence between a user-level stack and a kernel-level stack. So we execute kernel yield, which is going to call run_new_thread, which is going to call a switch operation. We're going to go through several levels here, where yield calls run_new_thread, saying I've got to pick a new thread, which calls switch, and we're going to find out what switch is about. But let's start for a moment with understanding why I have blue and red here. This is not a political statement: why is there a difference between the user-level stack and the kernel stack? One-to-one means that for every user-level thread and stack there is a kernel-level stack; I'll show you this next time when we really dive into real code, but for now, there's a kernel stack specially allocated for this thread. Can anybody tell me why, when I change modes by going into the kernel, I use the kernel stack rather than the user's stack? Safeguard, great: because we don't trust the user, ever. If we're the kernel, the most important thing we need to do when a system call comes in from the user is check what the user gave us to make sure it's okay; and the second most important thing is to check what the user gave us to make sure it's okay; and can you imagine the third thing? Check what the user gave us to make sure it's okay. And then we actually execute things. This is an important point, because
if the user code were to put a null or something in its stack pointer and then execute a system call, the kernel would panic or do something bad, because it wouldn't have a valid stack. So part of this transition from user mode to kernel mode has to change the stack. Here's what's going to happen when we run the new thread: we hit kernel yield, yield calls run_new_thread, and notice what run_new_thread does: it picks the next thread to run (that's a scheduling-type operation), and then it executes switch, and that switch operation is going to somehow switch to a different thread. Then we do some housekeeping, which might be cleaning things up, seeing how much CPU time we're using, et cetera. So how does the dispatcher switch to a new thread? We got an idea about that a little earlier: we're going to save anything that the next thread may trash. We've got to save the program counter, the registers, and the stack pointer of this blue thread, because we need to restore it later so we can keep computing pi, which is very important; pi is the important number in this class. We also want to make sure we maintain isolation between threads. Now, before you say, wait a minute, I thought threads were sharing within processes: remember, for now we are intentionally not distinguishing our threads from our processes. We want to make sure that when we switch to another thread, we don't trash the current thread's stack, and if it turns out we're going to a different process, we have to make sure we change the memory protection as well. So how does that switch look? Let's look at the stacks for a moment. Let's assume that what switch is going to do (I'll show you some actual assembly-like code in a moment) is save out everything from thread A and load everything from thread B back in.
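As a toy model of the run_new_thread step described above, here is a hedged C sketch of a ready queue plus a chooseNextThread-style picker. This is not Pintos or Linux code; the tcb_t fields and the FIFO policy are illustrative assumptions, and a real scheduler would have priorities, locking, and real register state.

```c
// Toy ready queue for a dispatcher: run_new_thread would call
// choose_next_thread() and then switch() to the returned TCB.
#include <stddef.h>

#define MAX_THREADS 4

// Minimal thread control block: an id plus fake saved "registers".
typedef struct { int id; long saved_sp; long saved_pc; } tcb_t;

static tcb_t ready_queue[MAX_THREADS];
static size_t rq_len = 0, rq_head = 0;

// Append a runnable thread to the ready queue (assumes it isn't full).
void rq_push(tcb_t t) {
    ready_queue[(rq_head + rq_len) % MAX_THREADS] = t;
    rq_len++;
}

// FIFO/round-robin pick of the next thread to run (assumes non-empty).
tcb_t choose_next_thread(void) {
    tcb_t t = ready_queue[rq_head];
    rq_head = (rq_head + 1) % MAX_THREADS;
    rq_len--;
    return t;
}
```

The point of the model is the separation the lecture draws: picking the next thread is policy (scheduling), while switch itself is mechanism.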
So how should we think about this? I'm going to show you a really silly piece of code, but it's going to help us. This code starts with a function A, which calls B, and once we get into B, it just goes into an infinite loop that does yield, yield, yield. If you can imagine what that means: yield is going to give the CPU up to somebody, and when we come back, we execute long enough to go around the loop, and then we yield again. Suppose we've got two threads, S and T, both running exactly the same code. What happens? For thread S, A is at the top of the stack; it enters B, which is executing the while loop, which calls yield, which calls run_new_thread, which calls switch, and switch switches to the other thread. Then, on that other thread, switch returns to run_new_thread, which returns from the system call to yield, which returns to the while loop, which calls yield again, which goes into the kernel, calls run_new_thread, calls switch, which switches back the other way, and then we come back. So in this particular example, where there are only two threads in the system and they're both running exactly the same code, we're going to go down the stack for S, switch over to T and come back up T's stack, then go down the stack for T, switch, and come back up for S. What's interesting is: what is this switch routine? The switch routine is really simple. This is MIPS code, but it's going to be very similar to the RISC-V you're all familiar with: we save all of the registers of the CPU into the thread control block; we save the stack pointer; we save the return PC; all of that. This green thread control block is the one we were running, and now we're done with it, and
we're going to load back the red one, and when we're done, we return. Although this is written in assembly language (sort of assembly-language pseudocode), notice that switch is a routine: we call the function switch, and it returns down here, back to wherever we came from. So here's the thing I think should be interesting. First, let me answer the question in the group chat: when you switch to a new thread, why are we reading the stack bottom-up and not top-down? The answer is: we're returning. Forget, for a moment, somehow getting from S to T; let's suspend that complexity in our minds. If we had just one thread in the whole system, A would call B, which would call yield, which would go into the kernel to run_new_thread, which would call switch. And what does switch do? When it's done, it returns; see the return down here. And what does return do? It pops something off the stack. Then run_new_thread is a function which pops something off the stack and returns back to user code, and yield thereby returns, and then we go back into yield again, and up and down. If there were only one thread in the system, the stack would grow as we call forward, and shrink as we return. Now, however (good, I'm glad you got that), the question is: how does this work going back and forth? Why does that happen? The answer is: when we get into switch on the left, we save out all of thread S's registers, and then we load in all of thread T's registers, including its stack pointer, which means that by the time we've gotten to the bottom of the switch routine, before we hit return, we're actually over here, because we're on a different stack. So when we return after executing switch, it takes us back up over there; and when we come down again and switch, we return back on the first side.
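You can see this "return onto a different stack" trick at user level with the <ucontext.h> API (old but still available on Linux/glibc): swapcontext saves the current registers and stack pointer and loads another context's, so execution resumes on the other stack, exactly the shape of the kernel switch in the lecture. This is an analogy, not the kernel's actual switch code; the names ctx_main, ctx_t, and run_demo are mine.

```c
// User-level illustration of switch: swapcontext saves one register set
// (including the stack pointer) and restores another, so control "returns"
// on the other thread's stack.
#include <ucontext.h>

static ucontext_t ctx_main, ctx_t;
static int trace[4];
static int n = 0;

static void thread_t_body(void) {
    trace[n++] = 2;                  // runs on its own private stack
    swapcontext(&ctx_t, &ctx_main);  // "switch" back to the main context
    trace[n++] = 4;                  // never reached in this demo
}

int run_demo(void) {
    static char stack_t[64 * 1024];  // stack for the second "thread"
    getcontext(&ctx_t);
    ctx_t.uc_stack.ss_sp = stack_t;
    ctx_t.uc_stack.ss_size = sizeof stack_t;
    ctx_t.uc_link = &ctx_main;
    makecontext(&ctx_t, thread_t_body, 0);

    trace[n++] = 1;
    swapcontext(&ctx_main, &ctx_t);  // switch to T; resumes here when T switches back
    trace[n++] = 3;
    return n;                        // 1, 2, 3 recorded: main, T, main again
}
```

The two swapcontext calls play the role of the two halves of switch: the first records S and starts T, the second records T and resumes S.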
Take a second to pause and understand this back-and-forth. (It'll never hit A again; that's correct, because there's an infinite loop: A is never going to come back, because B just stays in the loop forever.) But notice what's going on: when we change the stack pointer to thread T's stack, then when we do this return, even though we started on thread S's stack, by the time we get down here we're on thread T's stack. So when we do a return, and we have thread T's return PC, we're actually returning back into thread T, not into thread S, and vice versa. That's why we go back and forth. I'm going to let that marinate for you a little bit, and we're going to explore it some more, but good, other questions. Another question is: after you switch, does the kernel stack's thread not match the user stack's thread? The answer is they still match, because the way the user stack and the kernel stack are associated with each other is that the state in this thread, the red thread for T, remembers which stack it came from. So when we're on thread T's kernel stack, it's associated with thread T's user stack; the matching happens all the way from the kernel back up through thread T as well. In some sense, you could say that if I were to take this S, while it's suspended because I'm running in T, and disconnect its stack and thread control block and put them on some wait queue, so that S is not on the ready queue and the scheduler never gets it, then T will never go to S again; it'll just go to other things, maybe U, V, W, whatever's running. But S is happily suspended on some wait queue, and the moment I put S back on the ready queue, this behavior starts happening again and we can run S again. So the thread is a
complete, self-contained snapshot of a running state, namely a thread control block and two stacks, and you can put it away and come back later and make it runnable. That's the key idea we've got so far. Some details about that switch routine: the PC is saved, by the way; it's one of the registers that are saved. I'm not actually showing it here, but the PC is certainly saved. So what we just said is that the TCB plus the stacks contain a complete, restartable state of the thread, and you can put it anywhere for revival. Here's a question for you: what if you screw up switch? This is at the core of the core of the core of the scheduler inside the kernel. Let's say you forgot to restore register 32 or something. What's really bad about this is that you get intermittent failures, depending on whether the user code was actually using register 32 or not, and the system gets wrong results without warning. (Let's hold off on how all this got started for a moment; I know people are wondering. Let's just say for now that the system has S and T running and this is just happening; we haven't started anything yet; we've popped into a running state. So suspend your question on that for just a second.) So switch is extremely important, and the question might be: is there an exhaustive test you could run on the switch code? The answer is no. You're going to have to look at that code, and get other people to look at that code, and then look at that code again, over and over. It's not very long, so it's not going to change much, and it's not too complicated, but you have to be careful, because if it's wrong, the whole operating system is going to behave weirdly and you're not going to understand
why. There's a cautionary tale here that I like to tell sometimes. For speed, there was a kernel called Topaz, from one of Digital Equipment Corporation's research labs, and this was back in the days when memory was very scarce. Some very clever programmer decided to save an instruction in switch, which worked fine as long as the kernel wasn't bigger than a megabyte. I realize those numbers seem ridiculous to you today, but assume for a moment that a megabyte was a lot of memory at one point. As long as the kernel size was less than a megabyte, that is, a 20-bit address, this was fine; it was carefully documented, and it saved an instruction, so it was faster. What was their motivation? Well, the core of switch is executed on every switch, so it's pure overhead, and it made sense to make it smaller. The problem is (and they documented it; that was great) that time passed, people forgot, the clever person maybe retired, and later people started adding features to the kernel, because they were getting excited about putting stuff in the kernel, and it got bigger than a megabyte. And once it got bigger than a megabyte, suddenly very weird behavior started. Yes, I suppose one moral of the story could be "don't document"; I don't want to say that came out of this lecture. The moral of the story is: be sure you design for simplicity, and if you're going to make some micro-optimization, you'd better make sure it's really worth it. Hashtag read the docs. (The instruction in question saved loading the higher bits of a kernel address.) So, aren't we switching contexts here, with the threads we've been talking about? Well, assuming we're not changing the address space... and yes, someone is asking about build scripts; things weren't quite so sophisticated back then. If we're switching just threads, this is very
fast. What I've shown you is the thread-switching portion; if we need to switch between processes, we're going to have to start switching address spaces too, and I wanted to give you a little bit of an idea of the costs. The frequency of a context switch in a typical operating system like Linux is somewhere in the 10 to 100 millisecond range, and the overhead is about three or four microseconds, so you can see where this goes. Switching between threads is much faster, in the 100 nanosecond range. There are a thousand microseconds in a millisecond and a thousand nanoseconds in a microsecond, so you can see how these numbers come into play. The key is keeping the overheads low: switching between threads within a process is fast, while switching between processes takes longer, and that extra time (a 30- or 40-times cost) is really about things like saving the process state and so on. Now, even cheaper than switching threads by going into the kernel and coming back would be to run threads in user space. I know there were some questions about this at one point, but let's be clear here for a moment. What we've been talking about, and what the default in Linux is these days, is a one-to-one threading model, where every user thread has what's called a kernel thread. I'm going to use this terminology, and it'll take a little time to get used to, but a kernel thread is really a kernel stack that's matched one-to-one with a user thread, such that the user's stack gets switched out and the kernel stack is used when we're in the kernel; when we return to the user, we use the user's stack, but the kernel stack is always there, suspended. So if I have four threads, I have four kernel stacks inside the kernel, matched up with user threads. This is exactly what we've been talking about.
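A quick back-of-the-envelope on the overhead numbers above: a process context switch costing a few microseconds, taken every 10 to 100 milliseconds, is a tiny fraction of CPU time. The helper below is just that arithmetic, not anything from the slides.

```c
// Fraction of CPU time lost to context-switch overhead, given the
// per-switch overhead in microseconds and the switch period in
// milliseconds (lecture numbers: ~3-4 us every 10-100 ms).
double switch_overhead_fraction(double overhead_us, double period_ms) {
    return overhead_us / (period_ms * 1000.0);
}
```

With 4 microseconds of overhead every 10 milliseconds, the fraction is 0.0004, i.e. 0.04 percent, which is why the switch frequency can be fairly high without hurting throughput.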
It's what Pintos does for you, and it's the basic Linux model. But we can be faster. For instance, each kernel thread (where there's a kernel stack) could have more than one user thread associated with it, and when a user thread executes yield, it's a user-level yield: the user-level library knows how to do that same stack switching I just showed you, saving and restoring registers between threads without ever going into the kernel, so this user-level multiplexing can be very fast. If you Google "green threads", for instance, you'll find this was done a lot in the early days, when going into the kernel was more expensive; you can do it with a threading library, and a lot of early versions of Java were like this, where the threads all operated at user level, not in the kernel. Now, the good thing about the left model, the one-to-one model, is that if a particular user thread does I/O that puts it to sleep, its kernel thread gets put on the sleep queue for that I/O device, but the rest of them are still running, so they're still getting CPU time. That's good. In the many-to-one model, we have multiple user threads, and if any one of them goes into the kernel and sleeps on I/O, all of the threads are suspended, because nothing can run. So while the user-thread model is very fast, it doesn't interact well with sleeping in the kernel, and that's why there's also a many-to-many model, where you have a small number of kernel threads and many more user threads. That's got special library support; don't worry about it. As a programmer, you would just see a bunch of threads, and the library would hide all this from you. But today we're talking about the model on the left. All right, now, just to show you a little more: our model has potentially one CPU, and each process may have multiple threads.
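Here is a minimal pthread example of the one-to-one model just described: on Linux, each pthread_create gives you a user thread backed by a kernel thread (and kernel stack), and pthread_join waits for it. The worker function and run_worker wrapper are illustrative, not from the lecture slides.

```c
// One-to-one threading: create a kernel-backed thread, let it run,
// and join it to collect its result.
#include <pthread.h>

// Thread body: doubles its argument and returns it via the exit value.
static void *worker(void *arg) {
    long x = (long)arg;
    return (void *)(x * 2);
}

// Spawn a worker, wait for it, and return its result (-1 on failure).
long run_worker(long x) {
    pthread_t t;
    void *result = 0;
    if (pthread_create(&t, NULL, worker, (void *)x) != 0)
        return -1;
    pthread_join(t, &result);
    return (long)result;
}
```

Passing small integers through the void* argument and return value is a common (if slightly ugly) idiom; real code usually passes a pointer to a heap- or stack-allocated struct instead.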
There might also be multiple processes. Basically, the switch overhead within the same process is low; between different processes it's higher (we saw that factor of 30 or 40). The protection between threads in a process is low; that's by design, so they can share memory with each other. Between different processes it's high; that's also by design, to protect processes from one another. The overhead of sharing is low inside a process, because threads can just share memory, while between processes you've got to do IPC to work that out. And there's no parallelism here, only concurrency: in this instance there really is only one thing actually running at a time. Now, of course, we all know about multiple cores, so we can introduce parallelism. The top part of this model doesn't look much different, but now we have three, four, however many cores executing (there can be 28, or 54, in some instances), and now we start having some questions. The switching overhead might be similar, but if different processes are running on cores at the same time, there's medium overhead to communicate between them, as opposed to communicating with a process that's completely asleep, because it's not running on any core, which is higher. And yes, there's parallelism here. So this is an instance where concurrency, which is really the thing we worry about, gets translated into parallelism. I also wanted to say one quick thing about simultaneous multithreading, or what Intel calls hyper-threading (because they never want to take somebody else's name for something). We can imagine that, with a lot of transistors on a chip, we could put them together and allow multiple operations to run simultaneously. Think of time going down in these figures, with each line representing a cycle; what you see is that there are
three functional units in the case of the superscalar processor; the slots that are solid yellow are actually doing something, so we're getting some parallelism here, like in the middle of the figure, where there are three yellows in a row and three things are happening at once. We could get a multicore by putting several of these together; in the middle figure we have two cores, each the same as the one core on the left. Hyper-threading is a little different, in that two threads get interleaved on the same core, so rather than leaving empty slots like those gray parts, we fill them in with green and yellow, and we use much closer to 100% of the pipeline. That's called multithreading, simultaneous multithreading, or hyper-threading, and the thing on the right is a much more efficient use of hardware. A lot of Intel and AMD processors have hyper-threading, and you get a definite speedup because you're using more slots. The original technique was called simultaneous multithreading; you can take a look at it. In this instance you'd actually have multiple threads running simultaneously on a single core, whereas in the middle one you could have multiple threads on two cores. Do GPUs have hyper-threading? GPUs don't really have hyper-threading in the way you're thinking; GPUs are usually designed so that a single task takes over the whole GPU. Hyper-threading shouldn't affect locking, because good code will work under all circumstances of concurrency and parallelism; it shouldn't matter. Now, what happens when a thread blocks on I/O? (And yes, hyper-threading is parallel, because there are two actual threads and they are running simultaneously.) Oops, I just lost my place here; hold on a second, my bad, sorry about that; let me put the screen back. So now let's
move forward; I want to try to cover a couple of things before we get into some synchronization. So what happens if we block on I/O? Here's a different process that's actually copying from one file descriptor to another: you open one for reading and the other for writing (we showed you that code a couple of lectures ago), and now it executes a read system call. What happens? We take a system call into the kernel; the read operation is initiated; we switch to the kernel stack; we might initiate the device driver on the disk to go off and read; and what happens then? Well, run_new_thread and switch. So notice that the little bouncing back and forth between S and T works perfectly well if, instead of executing yield, the thread does a read operation. Thread communication, waiting for signals or joins, networking over sockets: all of that has similar behavior, and that's why this particular paradigm of the two stacks, which you can put on any sort of suspend queue and later put back on the ready queue, works very well for scheduling. But what happens if the thread never does I/O? Now we want to somehow progress beyond the early days of Windows 3.1 and the Macintosh. The compute-pi program could grab all the resources, and if it never printed to the console, never did I/O, never ran yield, we would effectively crash the system. So there's got to be some way to come back, and the answer is external events. There are a couple of them: one is interrupts, signals from hardware or software that stop the running code, and another is the timer, like an alarm clock that goes off every so often. Both of these are interrupts from the hardware that cause the user code to enter the kernel even if it wasn't going to.
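For reference, a hedged sketch of the fd-to-fd copy loop mentioned above (the exact code from the earlier lecture isn't reproduced in this transcript, so this is a reconstruction). The key point for scheduling is that each read() can block, implicitly yielding the CPU while the kernel waits on the device.

```c
// Copy from one file descriptor to another until EOF.
// Returns total bytes copied, or -1 on error.
#include <unistd.h>

long copy_fd(int in_fd, int out_fd) {
    char buf[4096];
    long total = 0;
    ssize_t n;
    while ((n = read(in_fd, buf, sizeof buf)) > 0) {  // may block: implicit yield
        ssize_t off = 0;
        while (off < n) {                             // handle short writes
            ssize_t w = write(out_fd, buf + off, n - off);
            if (w < 0)
                return -1;
            off += w;
        }
        total += n;
    }
    return n < 0 ? -1 : total;
}
```

While this thread is parked on the device's sleep queue inside read(), the dispatcher runs other threads, exactly the S/T bouncing shown earlier.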
If we make sure external events occur frequently enough, then we get fair sharing of the CPU as well. So, if you take a look here, I wanted to say a little bit about interrupts. A typical CPU has a bunch of devices that are all connected via interrupt lines to an interrupt controller; that interrupt controller goes through an interrupt mask, which lets us disable interrupts, then through an encoder, and tells the CPU to stop what it's doing to handle an interrupt. For instance, if something comes off the network, that generates an interrupt, which interrupts the CPU, and the CPU goes off and handles the network interrupt. So interrupts are invoked via interrupt lines from devices; the controller chooses which interrupt requests to honor; the operating system can mask out the ones it's currently dealing with; and a priority encoder lets us pick the highest-priority one. We'll get into the whole interrupt core of the operating system in more detail when we get to devices, but I'll point out a couple of things. The CPU can disable all interrupts, typically with a single bit, while it's processing an interrupt, and it can change the interrupt mask to select which devices it's willing to listen to. There's also typically a non-maskable interrupt, which might get triggered when, say, power is about to go out, and there's no way for the CPU to disable that; it's the "oh my gosh, hurry up and do something quickly" interrupt. (Each CPU has its own interrupt controller; that's correct. And the question about how we prevent threads from getting interrupted by other CPUs is an interesting one; we'll get into disabling interrupts in the next lecture. The kernel stack is in kernel memory, that's correct, and when you're at user level you can't access that kernel stack; otherwise that would defeat the whole purpose.)
I'll show you that next time too. So here's an example of a network interrupt. We're running some code (in assembly, whatever); the interrupt happens; typically the pipeline gets flushed, the program counter is saved, the interrupt state is saved, and we go into kernel mode, which does some manipulation of masks, saves state, and so on. We'll re-enable interrupts (we'll talk more about that) for everything except what we're currently handling; we go ahead and actually handle the interrupt itself, like grabbing the network packet; and then we restore and return from the interrupt, and at that point we can pick up where we were. So the code on the left, which we flushed, has been interrupted and restarted as user code, and the interrupt was able to stop the user code long enough to service the request and come back. I realize there are a lot of pieces to this; we'll talk more about them later. But an interrupt is a hardware-invoked context switch. When we had our worry that user code could hold on to the processor: well, if we have an interrupt that occurs regularly enough, we can switch, and that does the trick. The trick is that typical PCs have a timer (many timers, in some instances), which is a source of interrupts, and we just program the timer to go off every 10 to 100 milliseconds, which makes sure we're able to context switch. That case looks just like this: we're busy running code, and the interrupt takes us into the kernel. This is not a yield arc, not a system-call arc; what took us into the kernel was the interrupt itself, but that interrupt stack can be made to look identical, and then we just run_new_thread and switch. And, is there protection against a malicious device constantly making interrupts? It depends on the circumstances: if you have a malicious device attached to the hardware, then under bad circumstances that can be very bad, so hopefully that doesn't happen.
circumstances that can be very bad, so hopefully that doesn't happen, okay? So how do we initialize the TCB and stack? Well, we initialize the register fields of the thread control block; the stack pointer is made to point at the stack. And we set things up with what we'll call a threadroot stub: we don't have to initialize the whole stack, but we're going to set it up to look like it's been running, okay? And what we're going to do is put that on the ready queue, so that if we switch to it, what it does is it returns from switch by loading the return address and a couple of registers, and as a result it's going to start executing just as if it had been running for a long time and had executed switch. So that's the idea: this new threadroot stub has been set up as an environment with a new stack, and we've set up the right registers so that we can fake it out to look exactly like we're running something else that called switch. So this has got a state that looks like switch, all right? And so what does setting up the new thread do? Well, it sets up the stack pointer, it sets up a pointer to some code that needs to run and some function pointers, and then we switch to it and it runs, okay? Now this is going to depend heavily on what calling convention we use, so I'm showing you something that looks like a MIPS or a RISC-V; if you've got an x86, you have to do a little bit more with the stack to set it up. But the bottom line here is we're setting this up to look like we've switched to it, so that if we switch to it, it'll just start running, okay? And what does it look like? Well, threadroot does some housekeeping, it switches into user mode, and it calls a function pointer, okay? And if you look here, threadroot calls the thread code and its stack starts growing, and all of a sudden we've got a thread that's running, and this is exactly the way that S and T were started in that previous slide, okay? Now, the question here about what happens if the user thread
goes into an infinite loop: the answer is, well, because we have a timer going off, what's going to happen is it's going to waste its own CPU time, but others will get to run. In particular, there could be somebody who comes in and kills it off, and they have enough CPU to actually run, say, the shell or whatever, okay? All right, now let's talk about correctness, okay? And hopefully you guys can bear with me a little bit. Now that we've got concurrent threads, and we have the beginnings of an inkling about how to make sure they all run all the time, and we have an idea that if we were to disable interrupts we might actually prevent things from switching (that's going to be very important next lecture), we can start talking about how we make multi-threaded or multiple-process code work. And the problem is this non-determinism factor: the scheduler can run threads in any order and switch at any time. If the threads are independent, that's okay, but if they're cooperating on shared data, we've got a mess; and multiple threads inside of a single process are likely to be collaborating together, and then we may have a mess, okay? And the goal here is: how do we correctly design things so they work by design, regardless of what the scheduler does to us? I like to think of this as if the scheduler is a malicious, Murphy's-law device whose sole job is to run your code in the order that exposes the worst concurrency bug, and it's going to do it at the worst time, okay? So that's the Murphy's-law scheduler. All schedulers are Murphy's-law schedulers, and so our only defense is to design our code correctly so that it's not subject to the Murphy's-law scheduler, okay? Now, when a user thread switches (there's a question in the chat here), is the kernel stack preserved? The one objection I would have to that question is that there isn't one kernel stack; I hope you see that there are many kernel stacks, one for each thread, okay? And where they're preserved is in kernel memory, in
places that are associated with the threads that own them, so the operating system can find them when it switches back, all right? So, these are the many possible executions under the Murphy's-law scheduler. Here's an example of the bank server, which I think I've mentioned before, but I want to go into it. We have many ATMs and a central bank, and the question is: suppose we want to implement a server process to handle requests for that. Well, we might do something like this, where the bank server grabs the next request, processes it, grabs the next request, processes it, and does this serially, one at a time. And what does process_request do? It figures out what you want to do, and if you want to deposit, potentially it gets your account information, maybe using some disk I/O; it adds to the balance, let's say, if you're depositing; and then it stores the result, possibly also using disk I/O, and continues, okay? So more than one request being processed at once would seem like a good idea here. Why would we want to do that? Well, at minimum we'd like to get our disk I/O overlapped with computation, okay? So one option, which I'm not going to go into right now because we're a little low on time, but I'll give you a very brief idea: we could build an event-driven version of this, where we take that original task and we split it into a lot of pieces that are guaranteed to run to completion without ever stopping. So that would be, for instance: the first piece would be everything up to getting the account ID and starting the disk I/O; then another piece would be, after the disk I/O is done, we add to the balance and start the next disk I/O; and when that returns, that's another piece; and so on. So you pick these pieces between the disk I/Os that we know are going to run quickly, and you build a dispatch loop like this, where on the next event, which is like the end of a disk
I/O, you figure out which thing you were working on and you do the next piece, okay? And that quickly ends, you put the next piece back on the event queue, and you keep doing this in a loop, all right? This event-driven way of doing things seems really crazy unless you've done programming for windowing systems, and then this will look very familiar to you. But I will tell you that while you can program this way, it's very hard to get it right: you could forget an I/O step, one of these start or continue requests might actually do an I/O you weren't ready for, which is why we like to have many threads, okay? So threads can make this easier: let's have one thread for every user in the system doing a request. And what's great about this is we could have many folks all running deposits, and their disk I/Os might stall one of those threads, but another one would get to run, because remember, every thread has a kernel half and can be put to sleep. So what's great is now we've got parallelism and performance, okay? But here's the problem. Let's suppose you're depositing ten dollars and your parents are depositing a hundred dollars into your account at the same time, okay? I don't know how often that happens to you, but let's suppose it happens frequently. Here's you, you're thread one, and here are your parents, thread two. You load your balance; your parents' thread gets to run; it loads the balance, adds 100 bucks, and stores the balance back; then you get to run, and you add 10 bucks to the stale balance you loaded earlier and store that back. And if you look carefully at this, what you see is: how much did your account go up? A hundred and ten dollars? No, ten dollars, okay? I can tell you, you're not going to be happy about that; your parents aren't either. So we have a problem, and this problem starts showing up the moment we have threads working on the same data, okay? Concurrency. So this problem is one of the lowest-level problems: like, if we have thread A and thread B, and A is
storing to x and B is storing to y, normally we don't have a problem, okay? And no, this isn't a Robin Hood thing, as somebody in the chat is claiming; the problem is the money just went poof, nobody got it, so that's just bad, okay? So here is an instance which is a little crazier, where thread A is operating on some data including y, and thread B is operating on y, and suddenly we have a race condition. And the question might be: what are the possible values of x? And they could vary quite widely, okay? You could have x equal to one, you could have x equal to three, et cetera; many options in here, depending on how the threads are interleaved. So that's not good. Or what about this: thread A stores x equal one and B stores x equal two. If we assume that loads and stores are atomic, then x could be either one or two, non-deterministically. I suppose if you had some sort of weird serial processor, you might even get three out of this, where A is writing the bits 0001 and B is writing the bits 0010, and they get interleaved into 0011; that's one you don't have to worry about, okay? But we need atomic operations, okay? To understand a concurrent program, you need to know what the underlying individual operations are. An atomic operation is an operation that always runs to completion, or not at all; it's indivisible, it can't be stopped in the middle, and the state can't be modified by somebody else in the middle. And it's a fundamental building block, okay? If there are no atomic operations, there's no way for threads to work together, okay? So notice that what we really wanted to happen back here in the bank case is we wanted this get-account, add-to-account, store-account sequence to be atomic, so that it couldn't be interleaved, okay? So that's the atomic operation that we really want, okay? And on most machines, memory loads and stores are atomic. The weird example that I gave
you, that gave three: I have never seen that; it's just an amusing thing to think about, okay? But things like double-precision loads and stores aren't always atomic, okay? So if you have a floating-point double and you're loading and storing it, you could actually get the top half of one value and the bottom half of another under some circumstances, okay? So you've got to know what your atomic operations are, and next time we're going to talk a lot about what the native atomic operations are, over and above loads and stores, which is going to be important because we're also going to show you that load and store as atomic operations are not enough, okay? Just not enough. But let's hold that discussion off. So, if you remember what a lock is: a lock prevents somebody from doing something. You lock before entering a critical section, you unlock when you're done, and you wait if the thing's already locked; you wait for it to be unlocked. And the key idea here is that all synchronization, in order to make something correct, always involves waiting: rather than running right away, you wait, so that the atomic sections don't get interleaved, okay? So waiting is actually a good thing here, as long as you don't do it excessively, okay? And so, typically, as we mentioned several lectures ago, locks need to be allocated. It might be something like struct lock mylock and then you init it, or maybe pthread_mutex mylock and you initialize it; all the different systems have different ways of initializing the lock. And then you typically have acquire, which grabs the lock, and release, and they often take a pointer to the particular lock, okay? So how do we fix the banking problem? Well, we put locks around our atomic section: we acquire the lock and we release the lock, all right? So this thing in the middle is what we call a critical section. The critical section is the atomic operation that we've chosen, that we only want one thread in at a time, and the gatekeepers are going
to be the acquire and release, okay? And so here's an example, just to show you, with some animation: we have a bunch of threads, A, B, and C, and they all reach the acquire. If we let them into that critical section more than one at a time, we get chaos, but the lock will actually pick one to let through. So now A gets to run, and then when it exits and calls release, the next one gets to run; so now B gets to go, and then C gets to go, et cetera, okay? So, in order to make this all work properly in a banking operation, we must use the same lock with all the methods, withdraw et cetera, that are operating on the same data. So part of this is, now we have to analyze our problems properly, okay? So, if you remember some definitions: synchronization is using atomic operations to give us cooperation between threads; for now, loads and stores are the only ones. Mutual exclusion is this idea of preventing more than one thread from entering an area: we're going to mutually exclude things so that only one thread gets to run, and the thing we're excluding them from is this critical section. And so, at the simplest level, this idea of figuring out how to fix a synchronization issue is doing an analysis of: where do I need my critical sections, what's my shared data, and where are my locks, okay? Now, we're going to get a lot more sophisticated in a bit, okay? But another concurrent programming example might be: two threads, A and B, are competing with each other; A gets to run, B gets to run, okay? So what do we see here? Well, assume that memory loads and stores are atomic, but incrementing and decrementing are not. By the way, i = i + 1 and i++ are the same as far as concurrency is concerned, because they compile to the same thing. And what happens here, who wins? Well, it could be either, okay? And is it guaranteed that somebody wins? Well, maybe not, because they're going to keep overwriting each other, okay? Because i is a shared variable, and if both threads have their own CPU
running at the same speed, do we know that it doesn't go on forever, with nobody finishing because they never manage to get i less than 10 or greater than -1? Okay, so the inner loop looks like this: we load, we load, we add, we add, we store, we store. And notice what just happened: we overwrote. Thread B overwrote the results of thread A. And so the hand simulation here is like: oh, and we're off, A gets off to an early start; B says, ah, gotta go fast, tries really hard; A gets ahead and writes a one; B then goes and writes a minus one; A says, what? Okay. And in answer to the question on the chat: we're not talking about two processes, we're talking about two threads inside the same process, okay? And so they're actually sharing i, okay? And for the person worrying about coherency and sequential consistency: let's assume we're sequentially consistent and not worry about that question for now. So, does each thread have its own stack? Yes, but there's a global variable i, so this issue we're seeing here is because the global variable i is shared, okay? Now, they may not run simultaneously under all circumstances, but if we have multiple cores, or we have multi-threading of some sort like simultaneous multi-threading, they might run at the same time, or the scheduler might switch at exactly the wrong time. And so the answer is, you've got to think about this as if the scheduler is going to pick the worst possible interleaving, because it will happen, once in a thousand times or once in a million times, and it'll happen at three in the morning when an airplane will crash because of the bug, right? All right, the Murphy's law of schedulers is the best thing to think about, okay? So this particular example is the worst example that you can come up with: this is an uncontrolled race condition, where two threads are attempting to access the same data simultaneously, with one of them performing a write, okay? And here, simultaneous is defined, even though, you
know, maybe there's only one CPU; we're thinking about this from a concurrency standpoint, such that Murphy's scheduler could, under weird circumstances, flop back and forth. So, does this fix it? We just put locks, acquire and release, around the i = i + 1 and the i = i - 1. Did this fix it? Okay, well, it's better, because we always atomically increment or decrement, so the atomic operations are good. And technically there's no race here now, because a race is a situation where two threads are accessing the same data and one of them is performing a write, okay? If you ever have that circumstance, you've got a race, and that's really bad. So this is no longer a race, because the acquire and release will actually prevent two threads from being in the middle, where they're updating i at the same time. So that's not a race, but it's probably still broken, because you've got this uncontrolled incrementing and decrementing going on, and it's not likely to be what you wanted, okay? When might something like this make sense? Well, if each thread is supposed to get one unique value of i (hold on for me just a sec here, we're getting close to being done), then you might do something like this, but you're not going to do this while loop where one's going up and one's going down, okay? And in fact, you've already seen this example with a red-black tree. What you might do here is, there's a single lock at the root, okay? And thread A, when it does an insert, grabs the lock at the root, does the insert, and releases it. B might insert by grabbing the lock, doing an insert, and releasing, and then do a get by grabbing the lock, doing the get, and releasing. Here, both threads are modifying and reading the tree, but the reason we have locking in here is to make sure the tree itself is always correct, okay? So here the lock is associated with
the root of the tree; there are no races at the operational level inside the tree, okay? So threads are exchanging information through a consistent data structure; this is probably okay. Can you make it faster? You're going to be tempted, when we get you working on the file system. One temptation might be: well, the problem is, when thread A acquires the lock, it locks the whole tree, and we don't really need to do that. There are ways, for certain tree operations, where you can go down and have a lock per node, and deal with locking only the subtrees that you're actually going to change, but you've got to be really careful about that. So concurrency is very hard (and unfortunately I was hoping to get to semaphores today); even for practicing engineers it's hard, okay? This analysis of what you need to lock and so on is something that people don't always get right, and I just wanted to give you a couple of examples. So the Therac-25 radiation machine (there's a reading that's up on the resources page for us) is a great example of what happens when there's a concurrency bug. This was a radiation machine that could either deliver electrons directly or very high-energy, x-ray-style photons, and the way it did that was it either had a target or not: if it had a target, it would send a bunch of electrons at that target, and what would come out was x-rays; otherwise it could use the electrons directly. And the problem was, there was a bug such that when the operator was typing too fast, it actually screwed up the positioning that would pick the target and the dosage, and they fried a bunch of people, literally; they died from radiation poisoning; it was awful, okay? There's an interesting priority-inversion example that is up on today's reading as well; we'll talk about that when we get into priority inversions. There's also a discussion of the Toyota uncontrolled-acceleration problem, which was also a synchronization problem.
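The single-lock-at-the-root pattern described above can be sketched in a few lines. This is a hedged illustration under assumptions, not the lecture's actual code: a Python dict stands in for the red-black tree, and the class and method names are invented for the example.

```python
import threading

# One coarse lock guards the whole shared structure, like the single
# lock at the root of the tree in the lecture. Every operation takes
# the lock, does its work on the structure, and releases.
class LockedMap:
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def insert(self, key, value):
        with self._lock:              # acquire ... release around the whole op
            self._data[key] = value

    def get(self, key):
        with self._lock:
            return self._data.get(key)

m = LockedMap()
threads = [threading.Thread(target=m.insert, args=(i, i * i)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(m.get(3))  # 9: the structure stays consistent under concurrent inserts
```

The cost, exactly as warned above, is that every operation serializes on one lock; per-node locking can do better but is much harder to get right.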
okay? So what I want you guys to do is take your synchronization very seriously, all right? Now, unfortunately, I think I'm not going to be able to get to the semaphore discussion today. If you take a look, there are some pretty good slides on semaphores, and maybe I'll see if I can put up a little more audio on that later, but I want to let you guys go today. We really talked about concurrency, okay? We showed how to multiplex CPUs by unloading the current thread, loading the next thread, and context switching either voluntarily or through interrupts. We talked about how the thread control block, plus the stacks, gives the complete state of the thread and allows you to put it aside when it needs to go to sleep. And then we started this discussion about atomic operations, synchronization, mutual exclusion, and critical sections; those four things together are part of the discussion and the design that's involved in understanding how to make a correct-by-design multi-threaded application. And we did a lot of discussion of locks, which are a synchronization mechanism for enforcing mutual exclusion on critical sections; I gave you some good examples. Semaphores are a different, more powerful type of synchronization than locks; take a look at the slides, and I know they talked about this in section last week as well. So you guys have a great weekend, we will see you on Monday, have a good night, and get outside a little bit if you're in the local area here, because we can actually breathe for a change; that's good. All right, ciao. |
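The lock discussion recapped in this summary centered on the bank deposit critical section. As a minimal sketch of that fix, here Python's threading primitives stand in for the lecture's acquire/release; the amounts and the number of deposits are invented for the example.

```python
import threading

balance = 0
balance_lock = threading.Lock()

def deposit(amount):
    global balance
    with balance_lock:        # acquire(&balance_lock) ... release(&balance_lock)
        current = balance     # load balance
        current += amount     # add to balance
        balance = current     # store balance back

# you deposit $10 and your parents deposit $100, fifty times each
threads = [threading.Thread(target=deposit, args=(amt,)) for amt in [10, 100] * 50]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # 5500: no update is lost; the load/add/store is one critical section
```

Without the lock, the load/add/store of two threads can interleave and lose an update, which is exactly the "$110 became $10" problem from the lecture.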
CS_162_Operating_Systems_and_Systems_Programming_Berkeley | CS162_Lecture_8_Synchronization_3_Atomic_Instructions_Cont_Monitors_ReadersWriters.txt | all right everybody, welcome back to CS162. We're going to pick up where we left off on synchronization; we were just starting to discuss atomic instructions last time. We're going to start, however, by reminding you a little bit about what we've been talking about. So, we've been trying to figure out how to implement locks, and we started by asking ourselves: if we only had atomic loads and stores, what could we do? And at best, what we came up with in the too-much-milk solution domain was this, where we had two threads that were both synchronizing on a critical section of "if no milk, buy milk." After working through several different varieties, we finally came up with this, and it works; it's related to Dijkstra's solution from way back when, and it was solved in general by Lamport, as I mentioned. It works, but it's very unsatisfying, because essentially every thread in the system would have to have a different synchronization protocol and a different set of instructions. So if you look carefully, you see thread A is different from thread B, and you can basically figure out, at the X point here: if there's no note from B, it's safe for A to buy; otherwise, you wait to find out what's going on. And B basically just says, if there's no note from A, then it goes forward. So this was an interesting exercise, but we wanted to move on. Then we reminded ourselves why locks were appealing, because really what we wanted is this simple milk-problem solution where we acquire a lock, do the critical section, and release the lock. And if we could somehow figure out how to build a lock that gave us this sort of uniform API, then we might be in much better shape. And so, in the hope of doing that, we first started, as you might recall, talking
about disabling interrupts, okay? And here was an example that we had last time. What I did here was, I actually augmented this example, which I gave in class, by showing that you can have as many locks as you want: you just name an integer, and then the acquire and release take a pointer to it, just like the high-level locking that we've been talking about. This particular solution disabled interrupts, had a critical section which was related to basically modifying that lock variable, and then enabled interrupts. And we got to this after we decided that disabling interrupts to acquire the lock and enabling interrupts to release the lock was way too risky, and not something that we would want to actually do. So this was our solution, and just at a high level here: you notice that the lock is either free or busy; that could be zero or one. When we go to acquire, we first disable interrupts, and by doing that, we've prevented the scheduler from switching threads. So now we've got one thread that's currently the active one, and that's the one we're running on. We check if the lock's busy: if it's not busy, we set the lock to busy and re-enable; if it is busy, then we have to go through this trick of putting ourselves to sleep, which means putting ourselves on the right queues and so on, while, you know, still running, right? And so that's a little bit of a paradox, but we talked about how, in fact, that isn't as much of a paradox when you realize that the way the actual scheduler works, going through switch and back up again and so on, actually deals with disabled interrupts and so on, okay? And so, on the release side, again we disable interrupts, because we have a critical section; and in that case, we see if there was anything on the wait queue: if so, we'll wake it up, otherwise we will free the lock and re-enable. And the question about, does the idle thread re-enable
interrupts like every other thread? Yes. We haven't really talked a lot about the idle thread; in some sense, the idle thread is kind of what runs when nothing else is running. Clearly, if there's nothing running and you're going to be in that state for a long time, you have to re-enable interrupts, so yes. Now, the other thing I did for you guys last time, and I just want to do it again quickly, is to see exactly how this works. So I made an animation, and notice that I've also changed this animation a little bit to reflect the fact that we're actually inputting the particular lock we're interested in as an address into acquire and release. And this particular simulation is in the kernel. Why? Well, because we're disabling and re-enabling interrupts; if you're really interested in doing this at user level, it means you have to make a system call before you run this acquire and release. But if you notice, thread A is running and thread B is on the ready queue. If you remember what that means: you're either running, which means you have the CPU resources, or you're on the ready queue, which means that on the next timer tick you could potentially get switched in by the scheduler. So both A and B are runnable. The value of mylock is zero, which means that nobody has the lock, okay? And so basically, if we never got to acquire and release here, A and B are just going to alternate back and forth, just like S and T did in that example I gave you last time. The other fields we have up top here, in addition to the actual integer memory location mylock: we have a list of waiters, which is a wait queue associated with the lock; every lock has a wait queue, and those are the threads that aren't runnable, but instead are waiting for the lock to be released, and obviously it's empty right now, nobody's waiting on the lock. And then finally, this owner, which is going to point to the current thread that has ownership. But it's not a requirement;
there are some variants of locks that explicitly remember who their owner is, but there's nothing about a lock that really needs to remember an owner. If you think about the key analogy: you lock your door; does that mean you own that lock? Well, you could hand the key to somebody else and they could unlock it. So the notion of a lock doesn't by itself require an owner; putting an owner in is really more about understanding, for instance, whether somebody's violating something by trying to unlock when they don't own the lock, okay? So here we go; we're running the acquire and release code in the middle here. This thread runs and it hits the acquire, and acquire of course disables interrupts; that's what that little red circle was. And it now says: well, if the lock variable that's being passed in is equal to one, because somebody's got it, we're going to do something; but that's clearly not true, because it's zero. So we go to the else clause: we set the lock to one, meaning we've got it; for our own edification we're going to consider it being owned by A now, but we haven't actually changed anything anywhere, this is just for us; we re-enable interrupts and go back, and now thread A is happily running along, and it's in the critical section. Why? Because it's got the lock; the lock is locked and it has come back from acquire, right? That means it's in the critical section; it's doing fine. Now at some point, because thread B is on the ready queue, the timer goes off, and now we're going to let thread B run. So if you think about what has to happen: we have to unload thread A's registers, put them into the thread control block, and execute a switch; when we're done, A is now going to be on the ready queue and B is going to be running, okay? So here we go: we get a timer interrupt that takes
us into the kernel; that's what these dots are, and interrupts are disabled during that period of time. So we've entered the switch routine; at some point it takes thread A and puts it on the ready queue, it pulls thread B off and loads its registers into the CPU, and then we re-enable interrupts, and now B is running, okay? And notice that B is happily running (that's what this blue line means) without running into anything, because it hasn't tried to acquire the lock that A has already acquired. However, the moment we hit acquire, what happens? Well, we go to run the code, we disable interrupts, and at this point somebody's got the lock, because mylock is one. So it enters the portion of the code that basically puts itself on the wait queue; so now it's waiting, and going to sleep really means that we're going to take ourselves off of the CPU and run switch to get back to thread A, okay? And of course, just before A runs again, it's going to re-enable interrupts, and now we get to keep running. So A is now running happily in the critical section, and B is put to sleep. And if you ask yourself, where's the PC for B? Its program counter is right here, at the end of the blue arrow; so when it finally wakes up, it's going to come out of that part of the blue arrow, finish up the acquire, and then return back to the thread. So at some point A is now going to execute release, okay? And this is going to be important, as you see here: we execute release, we disable interrupts; is there anybody on the wait queue? Yes, there is. So that means this waiter is now going to be woken up and made ready to run, okay? And the mere act of putting him on the ready queue means we're going to let him continue to run and come back from acquire. So just by putting him on the ready queue, he's now going to wake up and have the lock. Now notice I haven't changed the lock from one to zero, right? Why? Well, because in some sense A has handed the lock
to B, and things are just going to stay locked, okay? And so then A re-enables interrupts and continues to run. Notice B hasn't started running; just because A unlocked doesn't mean that B immediately starts running. All it means is that B is taken off the wait queue and put on the ready queue. Sometime later, the timer goes off and the scheduler comes into play: interrupts are disabled as the timer interrupt happens, it goes into the scheduler, which puts thread A back on the ready queue, restores the registers for thread B, and gets it runnable, in which case it's going to emerge from sleeping; it's going to re-enable interrupts, and now it's going to emerge from the acquire call. So from the standpoint of B, it tried to acquire, it's been in that acquire call for all this time, and then eventually it came back out of acquire, and now it's in the critical section, running, okay? That's my simulation, so I'm going to let that go, since we did it already last time; are there any questions on that? I just wanted to do it again to make sure we're good. Okay, good. So, did B actually get the lock? That's a good question. Why do I know B got the lock? Because it emerged from acquire, okay? When you return from acquire, that means you got the lock. That's true of all of the locking protocols: when you do an acquire, you're sleeping there, and the moment you do return, you now have the lock, okay? And notice also that since B is running in a critical section and the lock is set, you know that somebody's got the lock, and it's going to be B that's running in the critical section. All right, so interrupts: as I've mentioned, we've seen a couple of places here where interrupts are scheduling thread A to run and thread B to run; these dotted lines are all about the timer interrupt coming in and rescheduling the next thread, okay? Now, where we were last time is, we said some problems with that solution
First of all, you can't give this lock implementation to users, because you can't allow them to enable and disable interrupts; that's way too dangerous. We could hide it behind the kernel, so acquire and release become system calls, but the downside is that just grabbing a lock now costs a system call, which is expensive, and the number of lock operations per unit time would be seriously limited. We'd like something that runs at user level rather than in the kernel. (To see where we're going: if we actually have to put somebody to sleep, we have to go into the kernel anyway, but sleeping is already a long operation, so making a system call at that point is probably the right thing. We'll get there about two-thirds of the way through the lecture.) The other, more subtle problem is that this doesn't work on a multiprocessor, or even on a multi-core chip, because when you disable interrupts, you only disable them on one of the processors. Yes, when I disable and re-enable interrupts I prevent the timer on that particular processor from going off and other interrupts from disturbing me, so I get a nice atomic section for building a lock; but the moment there's more than one processor, this breaks. That's a real downside of this particular implementation. So the alternative is something that runs in the memory system, doesn't have to enter the kernel, and works across multiple processors: atomic instruction sequences. When we first talked about atomic actions, remember, we had a set of instructions (grab the account balance, deposit the money, store the account
back) that we wanted to put together into a single atomic sequence, and we did it by acquiring and releasing a lock. What we'd like now is to mimic that idea as an instruction sequence that is itself atomic. All of the instructions I'm about to show you read a value and write a value to memory atomically, such that no other thread can get between the read and the write. Hardware is responsible for implementing this correctly, and it works on both uniprocessors and multiprocessors, which in some cases requires help from the cache-coherence protocol. Unlike disabling interrupts, these atomic sequences work fine across multiple cores or multiple processors. You can get Intel boxes with multiple chips tied together in a server system, so they're not just multi-core but multiprocessor as well, and this works for that too.

Here are several read-modify-write style instructions. The most common one, which you'll find on pretty much every architecture (the slide says "most," but I'll say every), is test&set. The way to read this code is that test&set is an instruction: everything inside the braces happens atomically, all at once, in a way that can't be interleaved with any other thread. What actually happens is you pass in an address, you get the value at that address (this is pseudocode), and at the same time you store a one there. Whatever was there, you grab it, you store a one, and you return the old value. If you think about it, this is going to help with synchronization: if we start with a zero there and twelve thousand threads all do test&set at once on the same address, only one of them will see the zero that was there before the ones went in; all the others will just see a one. That's our first primitive. It's atomic because everything shown happens together: 12,000 threads all doing test&set at once on the same address won't interleave, so only one of them turns the zero into a one, and the rest just turn a one into a one. We'll see how that helps us.

Now we can get much more interesting than this. Here's swap, which takes a memory address and a register (on, say, the x86 or SPARC processors; or a value, depending on the particular system): grab the value in the memory location and store the register's value there. This is a generalized test&set. If there was a five in memory and I swap in a six, then when I'm done there's a six in the memory location and I get the five back. Even more powerful is the so-called compare&swap, very popular on the x86 (and originally on the 68000). This is a little more complicated, so bear with me: it takes a memory address and two registers, and it says: if the value at the address equals register 1, then store register 2 there and return success; otherwise return failure. Look at that carefully. We have a memory address, so that's somewhere in memory, and it says: if what's in memory equals register 1, store register 2 there atomically and return success; otherwise return failure. This is an instruction with some pretty interesting properties, as I'll show you very quickly. The last one is called load-linked/store-conditional, which showed up originally on the R4000 and on the Alpha processors.
What it does is basically let you load a value, look at it, store it in a register, do something with it, and then store something back to the location; but that store is conditional, so that if anybody else stored anything to that address between the point when I loaded and the point when I stored, the store fails and I loop. I'm not going to go into this in great detail right now, but the idea is that if you spin enough times, you can construct test&set, swap, and compare&swap out of it, in a more RISC fashion; it's a little simpler than having a single instruction that does all of those operations.

So let's focus on these. Are there any questions? Keep in mind that everything I show between braces here is not a normal procedure call: all of it happens atomically, as a single instruction in the processor. Let me repeat: for any given instruction, everything between the two braces happens all together, at once, atomically, in a way that two threads can't interleave. The way it gets implemented as a single instruction (think of test&set) is, for instance, that you lock the memory bus, and a load and a store happen together. Why store a one? One is just a good value for synchronization; you'll see how this works in a moment. If the value is already one, then all you did is load a one and store a one: you get back a one and leave a one there. If it was a zero, you get back a zero, and you still leave a one. Either way the location ends up one, and the return value tells you what was there. The point is going to be: great synchronization will result. And yes, you can build a lock with this, a much better lock than any of the ones we've seen so far.
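The three primitives just described can be emulated with C11 atomics; hardware does each as one uninterruptible instruction, and here the C11 library guarantees the same atomicity. This is a sketch, and the function names (`test_and_set`, `swap`, `compare_and_swap`) are mine, chosen to match the slide's pseudocode.

```c
#include <stdatomic.h>

/* test&set: atomically grab the old value and store a 1. */
int test_and_set(atomic_int *addr) {
    return atomic_exchange(addr, 1);
}

/* swap: generalized test&set; store `reg`, return what was there. */
int swap(atomic_int *addr, int reg) {
    return atomic_exchange(addr, reg);
}

/* compare&swap: if *addr == reg1, store reg2 and return success (1);
 * otherwise return failure (0).  All of it happens atomically. */
int compare_and_swap(atomic_int *addr, int reg1, int reg2) {
    return atomic_compare_exchange_strong(addr, &reg1, reg2);
}
```

With these you can check the behavior described above: starting from zero, the first test&set returns zero (the caller "won") and every later one returns one.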
Now, sorry about the weird animation, but what I want to show you next is the following: a non-locking version of a linked list, which is pretty fun. I have a simple singly linked list with a single head: there's a root, it points to the first item, which points to the second, and so on. I want this list to be such that thousands of threads can simultaneously add things to it, and I want to make sure it doesn't get screwed up; and I can do that with compare&swap, without any locks. So unlike everything we've been indicating up to now, where you have to put a lock around shared data, here the atomicity of compare&swap means we don't need a lock at all, and this is going to be faster.

Let's look at the code. How do you add a new object to this list? I work in a loop. First I grab the root value into a register: load r1. Then I store that root value into the next field of the item I'm trying to add. Then I try to compare&swap a pointer to my object into the root, assuming the root hasn't changed. Notice what it says: as long as the root memory location is still equal to r1, swap me in as the new root; otherwise fail and do this all over again. I keep looping until I succeed, and that loop won't run long, because everybody's item eventually gets added to the list. (By the way, this is not busy waiting, because it resolves very quickly; it would not constitute busy waiting.)

So look at what happens when I add to the linked list. I take my object, I take the current root, and I store it in my new object's next pointer, so now there's a link from my new object to the old first element. Then I want to put a pointer to my new object into root, but only if somebody else didn't beat me to it. Because if somebody else beat me to it by adding their item, and I then store a pointer to my item into root, what did I just do? I threw their item away. That's the danger here, and this is how the code avoids it. First I load the current root into register r1. I store r1 into my new object's next field, so my object's next now points to the old head of the list; that's been done right here. Even if I fail and redo this over and over, nobody is harmed, because I'm just storing different successors into my own next field. The only thing that matters is finding a point at which I can change root to point at me, and I can only do that if nobody else changed it between my load and my compare&swap; if they still match, in it goes. Pretty cool, huh? Questions?

This is what's often called a lock-free implementation. Once you have these more powerful atomic instructions, there are often situations where you can build things like this that don't require locks at all. Up to now, with the 162 knowledge we've given you, you would have built this by grabbing a lock that protected the root, storing the root into your next pointer, storing a pointer to your object into the root, and then unlocking; that would be consistent all the time, and nobody's items would get lost. Instead we have this very quick code.
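The recipe just walked through (load the root, point the new node at it, then compare&swap the root) can be sketched in C11 atomics. This is a sketch of the lecture's push loop; the names (`node_t`, `push`) are mine, and I use the weak form of compare-exchange since we're already in a retry loop.

```c
#include <stdatomic.h>
#include <stddef.h>

typedef struct node {
    struct node *next;
    int data;
} node_t;

/* Lock-free push onto a singly linked list:
 *   r1 = root;  obj->next = r1;  CAS(root, r1, obj);  retry on failure.
 * If another thread slips in between the load and the CAS, the CAS
 * fails and we simply redo the load, harming nobody. */
void push(node_t *_Atomic *root, node_t *obj) {
    node_t *old;
    do {
        old = atomic_load(root);  /* load r1 */
        obj->next = old;          /* new object points at old head */
    } while (!atomic_compare_exchange_weak(root, &old, obj));
}
```

Pushing two nodes in sequence shows the invariant: the root always points at the most recent node, and each node's next field points at the previous head.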
Under the good circumstances, where there's zero contention, you basically take one pass through and never loop: load, store, compare&swap, done. One load, two stores, and you're good.

Question from the chat: can you make atomic operations like these yourself with disable-interrupts? The answer is that it wouldn't be the same thing, because there's no disabling of interrupts here. This is just an instruction, like add or multiply, except it's atomic; it's something much more powerful than disabling interrupts, which is like bringing a hammer to tap on a window: not a good idea. And as was just pointed out, the disable-interrupts approach wouldn't work on a multiprocessor, whereas this works fine on one.

All right, now what can we do with test&set? A couple of folks on the chat were already starting to figure this out. (First, a question from earlier: why do you need the store if you've already swapped? The store instruction, which I assume means the one that stores the old root into the next field of my new object, has to happen; that's what links my object to the rest of the list.)

So let's use test&set to make a lock; we're trying to get away from disabling interrupts and do something better. Here's my lock. It's in memory (any memory location you want), and I start it at zero. The interface is as usual: acquire takes a pointer to my lock, release takes a pointer to my lock. Acquire looks like this: in a while loop, it keeps looping over test&set, and test&set is atomic. Why does this work? It works because zero means free, and when I execute test&set one of two things happens. Either there's still a zero there, in which case I store a one but get back a zero, the while loop exits, and I've just exited acquire, meaning I got the lock; or somebody did that before me, and I just keep spinning. Release is very simple: I store a zero there, and the very next thread that manages to execute test&set gets back a zero, stores a one, and gets to exit acquire.

The simple explanation: if the lock is free, test&set reads a zero and sets the lock to one (lock's now busy), returns zero, and the while exits. If the lock is busy, test&set reads a one and sets the lock to one, which changes nothing (grab a one, store a one: atomic, but a no-op), and returns one, so the while keeps trying. Then when we set the lock back to zero, somebody gets to go.

Now, is this busy waiting? Yes. This is awful; you wouldn't want a lock that worked like this, but we're getting there. The first thing to understand, though, is that even though it busy-waits, this works perfectly well on a multiprocessor, and it works without ever going into the kernel: notice there are no system calls here, just accesses to memory. So while it's busy-waiting and not great from that standpoint, it's something we can build on. And yes, the definition of busy waiting is right on the slide: the thread consumes cycles while waiting. What happens here is that all the waiting threads spin until their quanta run out: one spins for the next hundred milliseconds and gives up the processor.
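The test&set lock just described can be sketched in C11 atomics; `atomic_exchange` plays the role of the test&set instruction. The type and function names (`spinlock_t`, `spin_acquire`, `spin_release`) are my own labels for the slide's acquire/release.

```c
#include <stdatomic.h>

/* Spinlock built from test&set: 0 means FREE, 1 means BUSY. */
typedef struct { atomic_int value; } spinlock_t;

void spin_acquire(spinlock_t *l) {
    /* Keep doing test&set until it returns 0: that means our store of
     * the 1 is the one that took the lock.  This is busy waiting. */
    while (atomic_exchange(&l->value, 1) == 1)
        ;  /* somebody else holds the lock; keep trying */
}

void spin_release(spinlock_t *l) {
    /* Store a zero; the next test&set to run will read it and win. */
    atomic_store(&l->value, 0);
}
```

In the uncontested case the acquire is a single atomic instruction and returns immediately, which is exactly the property the lecture keeps building on.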
Then the next one spins for a hundred milliseconds and gives up the processor, and so on; eventually we get to the thread that actually holds the lock, it runs the critical section and releases, and then somebody else finally gets to run. That's why busy waiting is so bad: all the waiting threads are basically wasting cycles. Now, the one time this might be OK, and I'll tell you this once, is if you have a multiprocessor with, say, 10 cores and you have only 10 threads, and you know for a fact there are 10 threads. Then busy waiting on one core doesn't impact the others, and if those 10 threads are trying to respond to the lock as quickly as possible, that might be a situation (don't try this at home, folks) where it makes sense to synchronize this way. But let's see if we can do better.

One more thing: this version is actually not great for a multiprocessor either; we'll make a better one in a second. The problem is that every time through the while loop, test&set is not a read, it's a write (we read and write). So it's a write operation, which means that with cache coherence, the cache line is bouncing back and forth between every core running this code. If you know anything about cache coherence, this is awful: you burn up all your bus or network cycles moving this lock around, and, ironically, you're not even changing it; you're setting it to one over and over again.

A comment on the chat, which is interesting: atomic instructions on a 64-core processor sound hard. They're not, and the reason is that if you have a working cache-coherence protocol, you just pull the line into your cache, lock it so it can't be removed while you do the atomic operation, and then release it in your cache, and it works fine. It doesn't matter how many cores there are; a working cache-coherence protocol lets you build arbitrary atomic instructions like this.
Now, busy waiting is bad, so let's tally up what we just built. Positives: the machine can still receive interrupts, because I never disabled any; user code can use the lock, which is great; and it works on a multiprocessor, sort of. Negatives: it's very inefficient, because the waiting thread consumes cycles, and the waiting thread takes cycles away from the thread holding the lock. So, ironically, the thread that's waiting is actually preventing the thread that would give up the lock from making progress toward giving it up. And this could be priority inversion: if the busy-waiting thread has higher priority than the thread holding the lock, you might actually make no progress at all. You don't know anything about priority scheduling yet (you will in a couple of lectures), but that's priority inversion: a lower-priority thread holds the lock while a higher-priority thread is forced to spin waiting for it, so the low-priority thread is effectively preventing the high-priority thread from running. This is exactly what happened on the original Mars rover; I'll have an interesting story about that in a couple of lectures. And for semaphores and monitors, where you start getting more sophisticated styles of synchronization, a thread may wait arbitrarily long, so you may end up spinning arbitrarily long; we need to do something else. Any solution you give on an exam or homework should avoid busy waiting unless we explicitly tell you it's OK, which I don't think we will in most cases.

Let me give you one other thing, called test-and-test-and-set, just so you know it. It's a much better solution for multiprocessors in situations where busy waiting is not a concern, because you know you're consuming every core anyway. Here's what it looks like. The release is the same, but look at what we do in acquire: while the lock is held, we spin.
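The test-and-test-and-set acquire might look like this in C11 atomics; it's a sketch, with the names (`spinlock_t`, `ttas_acquire`, `ttas_release`) chosen by me. The inner loop is a plain read, which is the whole point.

```c
#include <stdatomic.h>

typedef struct { atomic_int value; } spinlock_t;

/* Test-and-test-and-set: spin on plain reads (which hit in the local
 * cache and generate no bus traffic), and only attempt the expensive
 * test&set once the lock looks free. */
void ttas_acquire(spinlock_t *l) {
    for (;;) {
        while (atomic_load(&l->value) == 1)
            ;  /* read-only spin on our cached copy */
        if (atomic_exchange(&l->value, 1) == 0)
            return;  /* our test&set turned the 0 into a 1: we hold it */
        /* somebody beat us to it; go back to read-only spinning */
    }
}

void ttas_release(spinlock_t *l) {
    atomic_store(&l->value, 0);
}
```

Compared with the plain spinlock, the only write happens when the lock actually appears free, which prevents the cache line from ping-ponging among the losing cores.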
We spin on it with plain reads: while it's equal to one, we're just rereading it, and that takes essentially no bus traffic, because you get a cached copy in your cache and spin on that without doing any writes. Then the moment it becomes zero, you exit the inner loop, quickly try one test&set, and if that fails you go back to spinning. What this does is prevent the ping-ponging effect where all the nodes that aren't actually succeeding in getting the lock keep writing and causing the cache line to bounce back and forth. That's called test-and-test-and-set: it fixes the ping-ponging in the cache-coherence protocol, but it still has a busy-waiting problem.

So what can we do about the busy waiting? Remember what we did with disable and enable interrupts to get rid of it: rather than the actual disabling and enabling representing acquire and release, we used them to very quickly protect the implementation of a lock. Let's do the same thing here: build test&set locks without busy waiting. Well, we can mostly get rid of busy waiting, and it's a "mostly" that's acceptable. Notice we introduce something new here, in red, called the guard variable. It's global across all the locks in our system; and then of course there's mylock, our actual lock. If we had 20 locks, we'd have 20 of the blue integers and one guard for all of them. The guard is the thing we test&set on. Acquire looks like this: while (test&set(guard)); — which looks like spin waiting, except we're going to make sure that what's inside the guarded critical section is really fast, so we won't be spin-waiting very long. We spin until we hold the guard; so now guard is one, and we know that no other thread is inside this critical section of
the lock implementation. Then we do what we've just seen: if the lock is busy, we put ourselves on some wait queue and go to sleep, and somehow, simultaneously, set guard back to zero. Hopefully that sounds familiar: it's the same trick as "somehow put ourselves to sleep and re-enable interrupts." Otherwise, if the lock wasn't busy, we go ahead and make it busy, release the guard by setting it to zero, and exit acquire; that's the case where we got the lock.

This is much better than the kernel-interrupt version, because it doesn't make a system call. For sleep, you're still going to make a system call, because right now the only threads you know about are kernel threads; but the hope is that if you're going to sleep, you'll be asleep for a while anyway, so taking a little time to get into the kernel and onto a wait queue is OK. For release, we grab the guard, check whether anybody is on the wait queue; if so, we have to do something to wake them up; otherwise we go ahead and set the lock to free. Depending on your circumstances you might still have a priority-inversion issue, but let's hold off on that for now; if you were worried about it, you could, for instance, have a different guard variable for each priority, and that would take care of it. And notice: when we go to sleep, we have to somehow reset the guard variable, because if we went to sleep with guard equal to one, nobody else could ever release the lock, and we'd be in trouble.

Now let's compare this to the disable-interrupt solution. When is guard set to one? Right there, when we do the while(test&set); that's what sets guard to one. Remember, the disable-interrupt solution had a quick critical section bracketed by disable and re-enable interrupts. Notice that we've done essentially the same style of redesign of acquire and release here: we've turned "disable interrupts" into the while(test&set) on guard, and "enable interrupts" into setting guard to zero. It's essentially the same code. Looking at it another way: here's how we first used interrupts to build acquire. That first version was so silly that we didn't even have separate locks; maybe we pass in an integer pointer, but it doesn't help, because there's only one disable and enable in the whole system. We decided that was a really bad idea, so we turned it into code that uses disable and enable as a lock around a simple critical section that's very fast. Same idea here for test&set: the basic spin-waiting test&set lock looks like the thing on the left, and we took that style of locking and used it to build a lock that we can afford to hold for long periods of time, where the test&set on the guard itself is very fast. Questions?

(This middle one, by the way, is the prior example; nothing new.) But notice that with test&set here we do busy-wait, only for a very short period of time, because the thread holding the guard is just doing a really quick critical section and then releasing it. The problem with the plain version I've got in the middle is really that when you acquire a lock, you have no idea how long the critical section is.
And as we start getting more sophisticated in our locking, we may have no idea how long that critical section is, and we don't want the system to be locked up because a critical section is long. What we want is to go to sleep as quickly as we can when we're waiting on a lock. (And the reason you'd use the same guard for all the locks is so that you didn't have to pass a unique guard into acquire and release; that would get messy as an API. But if you felt like you wanted several different guards, you could; there's no reason you couldn't have multiple guards with this implementation.)

Now let's tease this apart. Everything here is still in user mode, and that's why this is particularly helpful. Take the middle one for a moment: it's entirely at user level. We test&set on the lock, basically spinning until we get a zero back, and we set the lock back to zero to release it. All of this runs at user level; everybody good on that? There are no system calls involved, because test&set instructions are just like adds and subtracts and multiplies: they run at user level. The version on the right takes that original acquire and release and instead uses them (here's acquire, here's release) around an implementation of a lock where, when we discover that the actual lock (you could say the blue one) is taken, we can put ourselves to sleep on a sleep queue. So there's potentially a system call in the middle to put us to sleep, but that only happens if we actually have to go to sleep; with an uncontested lock we can grab it and release it really quickly, with no system calls involved.

What was said on the chat is exactly correct. What's good about this acquire implementation is that we grab the guard (that's just for the lock implementation), we quickly check, and if the lock is taken (the thing in blue), we put ourselves to sleep on the sleep queue and release the guard as part of being put to sleep. So the thing on the far right is a way to take threads that are trying to acquire the lock but failing and very quickly put them to sleep on the actual sleep queue by diving into the kernel. The test&set is busy waiting, but the guard is only held for a very short time, so the busy waiting doesn't have a major impact. Yes: that is exactly the way to look at the thing on the right.

Now let's go a little further. I want to introduce you to the futex. Yes, this is good, but there's something in the middle where we have to put things to sleep, and we don't have a good interface for that. Look at the so-called futex system call that Linux has; it stands for "fast userspace mutex," as you see at the top of the slide. It takes several arguments: a pointer to an integer in memory (which ought to sound familiar from what we were just doing); an operation, which for us will be either WAIT or WAKE (those are the only two we'll look at; there are a bunch of other, more interesting ones; you can do a man on futex); a value; and, optionally, a timeout, so the wait can time out if it waits too long. So futex is an interface to the kernel's sleep functionality, where the thread puts itself to sleep.
If a thread calls futex with FUTEX_WAIT, and the value it passes in is equal to the value in memory, it goes to sleep on the sleep queue, and the only way it wakes up is if somebody calls futex with FUTEX_WAKE. Futex is not typically exposed in libc; it's used inside the implementation of pthreads, so you can implement locks and semaphores and monitors (which we'll get to in a second).

So here's our first try. For acquire: test&set; if we fail, rather than looping in a tight loop, we just call futex. Futex takes the lock pointer, and we know the value there is one right now, because our test&set failed; we say we want to WAIT; and we say: put me to sleep, but only if the lock is still equal to 1. If you think about that, what we're saying is: I want to avoid the race condition where, between my test&set noticing the one and my calling futex, somebody released the lock, and I went to sleep in the kernel but nobody ever woke me up. That is exactly why futex has this extra argument. So notice: test&set; if it's a one, we call futex with FUTEX_WAIT, say "here's the lock value," and as long as it's still a one, it puts me to sleep; if it's not still a one, it comes right back out of futex and we go around the while loop again. So if we're lucky enough to catch a zero on the test&set, we immediately exit with the lock; otherwise this puts us to sleep until somebody releases, and at the point they release, they set the lock to zero and then say "wake up one."

Now think about this: by using futex as the sleep interface, there's no busy waiting whatsoever in here. If you look at it, there are no wasted cycles, and the overhead of acquiring is potentially as low as one atomic instruction, with no system call. Unfortunately, every unlock makes a system call.
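This first-try futex lock can be written out concretely; here's a sketch, assuming Linux (there's no libc wrapper for futex, so it goes through `syscall`). The type and function names are mine; the logic is the slide's: one atomic exchange to acquire, FUTEX_WAIT with the value 1 to sleep, and an unconditional FUTEX_WAKE on release.

```c
#define _GNU_SOURCE
#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

/* First-try futex lock: 0 = FREE, 1 = BUSY.  Uncontended acquire is one
 * atomic instruction; contended acquire sleeps in the kernel instead of
 * spinning.  The weakness: every release makes a system call. */
typedef struct { atomic_int value; } futex_lock_t;

static void futex_call(atomic_int *uaddr, int op, int val) {
    syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

void futex_acquire(futex_lock_t *l) {
    while (atomic_exchange(&l->value, 1) == 1) {
        /* Sleep only if the lock is still 1; this closes the race where
         * the holder released between our test&set and this call. */
        futex_call(&l->value, FUTEX_WAIT, 1);
    }
}

void futex_release(futex_lock_t *l) {
    atomic_store(&l->value, 0);
    futex_call(&l->value, FUTEX_WAKE, 1);  /* wake at most one waiter */
}
```

In the uncontended case the acquire never enters the kernel at all: the exchange returns zero and the loop body never runs.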
So this is not quite clever enough: we can grab the lock quickly, but we can't release it quickly. (Why a while instead of an if? Because we have to keep looping until we're woken up and there's a zero there; and keep in mind that even after we get woken up, between returning from futex and trying to grab the lock again, somebody may grab it out from under us. So we keep looping to make sure we were actually the ones who turned a zero into a one. If we were, we have the lock; otherwise we don't.) That's acquire.

Now, the only objection you might have to this is what I say at the bottom: to unlock, you always make a system call. What we'd like is the situation where an uncontested lock (no two threads fighting over it; in general just one thread grabbing and releasing) runs completely at user level, as fast as possible, and we only use system calls when threads actually have to go to sleep. So here's another attempt. Notice I've added a new variable associated with the lock, called maybe_waiters, and I start it at false. I do while(test&set); assume for a moment I go from zero to one, which means I've got the lock, and I exit; that's great. When I release, I set the value back to zero (and by the way, that line should read "*lock = 0"; let me fix that right now, since this will be confusing enough without a bug in it). Then I ask: is maybe_waiters true? If it's false, I skip the wake-up arm and return right away. So as long as there's only one thread grabbing and releasing the lock, we never enter the kernel.

(Does FUTEX_WAKE wake all the threads? No: the last argument, which I didn't talk about, tells you how many threads to wake up, and this FUTEX_WAKE wakes at most one. And yes, you could use futex itself for the actual locking, but that would kind of defeat the whole purpose, because you'd be diving into the kernel.)

Now look at the contended case: we try to grab the lock and get a one back, so somebody else has the lock and we want to go to sleep. We set maybe_waiters to true and call futex, and assuming the lock is still one, that actually puts us to sleep (forget the extra argument for a moment). Later, when the release happens and the lock is set to zero, the releaser asks: is maybe_waiters true? It is, so we know for a fact somebody is sleeping in futex; at that point we set maybe_waiters to false and wake one up. The woken thread emerges over here, sets maybe_waiters back to true (which handles a subtle race condition when there are multiple sleepers on the wait queue in the kernel), and tries while(test&set) again; assuming it succeeds, it exits, and we're good to go. I don't want to go into this in great detail, but you should search for "Futexes Are Tricky" by Ulrich Drepper to see a bit of how to optimize this.

However, let me blow your mind a little bit more: test&set is just the wrong thing to use here. Much better is more atomics. The lock here is not going to have two states; it's going to have three. If you think about what I just showed you, it's already kind of like three states: not-locked with maybe_waiters false; locked with maybe_waiters false; and locked with maybe_waiters true. Those are the kinds of states we're distinguishing.
of three or four options in there and in fact what we really want is three options which you'll see from that that paper if you look at it unlocked which nobody has to lock locked which is one thread's got the lock and nobody's in the kernel and contested which says somebody might be in the kernel and if we can do the right thing with this we'd like to only call the the wake up if we if we know for a fact somebody might be in the kernel and so what this code does and i'm going to leave this to the uh to the reader for later but the first thing it tries to do is it tries to compare and swap uh if it's unlocked we get locked back uh or we put a lock there and we immediately return and we win otherwise we swap in this second state of contested and as long as the thing's still unlocked we go to sleep and every time we wake up we try to swap in contested and look for unlocked otherwise uh we'll just keep sleeping and when we go to wake up only if uh the value there is contested do we wake things up so i don't want to go through this and greet detail but the interface here is really clean because there's only one integer that's got three enum values uh the lock is grabbed cleanly by either the compare and swap or the first swap so where do the atomic operations come into play they basically turn a zero into either locked or contested and there's no overhead if uncontested so as long as you've got a thread that grabs a lock release a lock grabs a lock release a lock it can do that entirely at user level at high speed with no kernel calls okay and you can build semaphores in a similar similar way all right and so uh that's an exercise for the for the class reader now and that that other paper i uh other web description i told you about the blog basically talks about the three states all right now uh the question of will this be on the midterm there might be something on atomics on the midterm that uh whether you'll have something that's complicated in the midterms hard to 
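The three-state scheme can be modeled as a toy in Python. To be clear, this is purely pedagogical, not a real futex: real futexes are kernel system calls and the atomics are hardware instructions, whereas here both are emulated with an ordinary `threading.Condition`, and the class and method names (`ToyFutexLock`, `_cmpxchg`, `_futex_wait`) are my own invention for the sketch, not any real API.

```python
import threading

UNLOCKED, LOCKED, CONTESTED = 0, 1, 2

class ToyFutexLock:
    """Toy model of the three-state futex lock. Compare-and-swap and the
    futex wait/wake queue are emulated with a Condition — pedagogical only."""
    def __init__(self):
        self.val = UNLOCKED
        self._mon = threading.Condition()   # stands in for the kernel wait queue

    def _cmpxchg(self, expect, new):        # emulated atomic compare-and-swap
        with self._mon:
            old = self.val
            if old == expect:
                self.val = new
            return old

    def _xchg(self, new):                   # emulated atomic swap
        with self._mon:
            old, self.val = self.val, new
            return old

    def _futex_wait(self, expect):          # sleep only if val still equals expect
        with self._mon:
            while self.val == expect:
                self._mon.wait()

    def _futex_wake(self):                  # wake at most one sleeper
        with self._mon:
            self._mon.notify()

    def acquire(self):
        if self._cmpxchg(UNLOCKED, LOCKED) == UNLOCKED:
            return                          # fast path: no "kernel" involved
        # slow path: mark contested, sleep, retry until we flip unlocked->contested
        while self._xchg(CONTESTED) != UNLOCKED:
            self._futex_wait(CONTESTED)

    def release(self):
        if self._xchg(UNLOCKED) == CONTESTED:
            self._futex_wake()              # someone may be asleep in the "kernel"
```

Note how the uncontested path never touches the emulated kernel — only contended acquires and releases of a contested lock do, which is the whole point of the design.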
say.

So where are we going with synchronization? We now have, I think, a really good understanding of why loads and stores by themselves aren't enough. We talked about disabling interrupts as a locking mechanism — it really only works on one processor, but it works great in the kernel for certain situations, and we'll be using it a lot as we go on. We talked about test-and-set, and I hope you're all starting to get a flavor for how it works. And we need to provide primitives at user level that let us do better synchronization. We've already built a bunch of locks, and you can imagine semaphores built very similarly.

But I want to move on to a better primitive than locks and semaphores. Remember what a semaphore is — you've used them in project one, so you should be very familiar with them by now: a kind of generalized lock, a non-negative integer supporting the following operations. One, initialization with a value at the very beginning. Two, a down or P operation, which atomically waits for the semaphore to become positive and then decrements it by one — it never goes below zero, and that wait is a sleeping wait, not a busy-wait. Three, an up or V operation, which increments the semaphore and wakes up a waiter if somebody was waiting. Technically, examining the value after initialization is not allowed — that's not part of the official interface — although if you look at POSIX semaphores you'll find they do provide that as an option, outside the semaphore proper.

We then came up with a bounded-buffer solution using semaphores. What we said was: you really want one semaphore per constraint. It's simple to make a lock or mutex out of a semaphore by initializing it to one, and then fullSlots and emptySlots represent, in the Coke-machine example, how many Cokes are there to be taken and how many could still be added. We start fullSlots at zero because there's no Coke in the machine, set emptySlots to the number of Cokes the machine can take, and set the mutex to one because it's a lock. The mutex serves as the critical section that keeps enqueue and dequeue from getting messed up; then a V on fullSlots wakes up a consumer that's been waiting for a Coke to show up, and a V on emptySlots wakes up a producer that's been trying to put an extra Coke in. This code works — hopefully you've digested it from last time (it's in my extra lecture too) — and it's a huge step up from having just locks: if you go back to last lecture, you'll see that building a bounded buffer with only locks is a mess.

The problem is that the semaphore here serves two purposes at once: it's a mutex and it's a scheduling constraint. If we swapped the two P operations in the producer, we could get deadlock. You can build this kind of code, but how do you know it's correct? We'd like something better, and something better is this: use locks for mutual exclusion, because that's what we want them for, and something called a condition variable for scheduling constraints. A monitor is a lock plus zero or more — usually one or more — condition variables for maintaining safe concurrent access to shared data.
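The semaphore version of the Coke-machine bounded buffer can be sketched in Python with `threading.Semaphore`; the names `full_slots`/`empty_slots`/`mutex` follow the lecture's slides, and the capacity of 3 is an arbitrary choice for the sketch.

```python
import threading
from collections import deque

CAPACITY = 3                                 # arbitrary machine size
full_slots = threading.Semaphore(0)          # Cokes available to take
empty_slots = threading.Semaphore(CAPACITY)  # room left in the machine
mutex = threading.Semaphore(1)               # a semaphore initialized to 1 is a lock
buffer = deque()

def producer(item):
    empty_slots.acquire()   # P: wait for a free slot
    mutex.acquire()         # P: enter the enqueue/dequeue critical section
    buffer.append(item)
    mutex.release()         # V
    full_slots.release()    # V: wake a consumer waiting for a Coke

def consumer():
    full_slots.acquire()    # P: wait for an item to exist
    mutex.acquire()
    item = buffer.popleft()
    mutex.release()
    empty_slots.release()   # V: wake a producer waiting for room
    return item
```

Note the one-semaphore-per-constraint rule at work: one semaphore for "buffer not full," one for "buffer not empty," one for mutual exclusion — and swapping the two acquires in `producer` would introduce exactly the deadlock the lecture warns about.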
Some languages, like Java, actually have monitors natively; in other languages like C you use a library — pthreads provides condition variables as a library option. A monitor is really a paradigm for concurrent programming: once you get a handle on the monitor pattern, you'll find you can do some very complicated synchronization pretty easily.

So what's a condition variable? A condition variable is a queue that a thread can sleep on when the conditions aren't right to proceed — and you sleep on it with the lock held. You only ever use a condition variable to go to sleep inside a critical section. I want to stop for a moment, because I hope that's weird for everybody: the only way you're supposed to use a condition variable is by going to sleep inside a critical section, when you've determined that the conditions aren't right for proceeding. I'll give you examples, but I wanted to highlight that up until now, sleeping while holding a lock was a very bad idea, because you'd deadlock the system. Condition variables are made for that — they're supposed to be used that way, and in fact that's how you make sure you have the right constraints and that your synchronization works.

There are some standard operations: wait on a condition variable, which is how you go to sleep; signal, which is how somebody wakes you up; and broadcast, which wakes up everybody who's sleeping. You can think of condition variables as generalizations of the wait queue that normally lives inside the kernel, brought out to user level for you to use. So that's the API, and the rule is: you have to hold the lock when doing any condition variable operation. I'm going to say that again — you have to hold the lock.
Think of a monitor as a pattern — a way of programming that controls actions on some shared data. There's a queue of waiting threads, the ones that are sleeping, just like we had in the kernel; a lock that controls entry; and then potentially a bunch of condition variables holding threads that have already entered the monitor but are now waiting on conditions. The entry lock is just a regular lock queue: anybody trying to acquire it might be put to sleep waiting for the lock.

I think the best way to get going on this is a simple example. We've just been looking at the bounded-buffer Coke machine, where there's a constraint on the size of the buffer — we'll get back there in a second — but let's start with a synchronized buffer that's infinite. An infinite buffer means it never gets too big; we don't worry about the size. But if a consumer comes along and there's nothing to take out of the buffer, we want the consumer to go to sleep. So this is half of the Coke-machine example — the consumer half. Notice what I have here: a lock I'll call bufLock, a condition variable I'll call bufCV, and a queue — some sort of linked list or doubly linked list or whatever we're using.

The producer, since we don't have to worry about overflowing the queue, just acquires the lock and enqueues the item. Why can we do that? We have the lock, so we don't have to worry about different threads trying to mess with the queue at the same time. And because we've acquired the lock, we can now do condition variable operations: the only one we do is signal, to say "hey, I just put something on the queue, so if you happen to be sleeping there you might want to look." Then we release the lock. So the producer is pretty simple: acquire the lock, enqueue, signal anybody who happens to be waiting for a Coke because I just put one there, release. (Is this exactly notify in Java? Yes — hold that thought.)

The consumer is the more interesting part. We acquire the lock, and now I'm doing something very strange: I'm saying while the queue is empty, go to sleep — condition wait, and I have to pass both the condition variable and the lock. Can anybody figure out why I have to say what the associated lock is when I go to sleep? That's exactly right: because when I go to sleep, somebody had better release it for me. So at one level, I want your brains to appreciate that the reason we hand over the lock is so that things get unlocked when we sleep — what I'm proposing doesn't violate the laws of physics or programming in any way. But now push that knowledge aside and get into the paradigm. Because we have the lock, we can check things like the size of the queue without worrying that it will change between our checking it and our acting on it — the lock is taken. We can happily check conditions, do anything we want, and nothing changes on us until we release the lock — and that includes going to sleep. From your standpoint as the programmer: I have the lock; I went to sleep; when I get woken up, I still have the lock. That's the way you have to think about this.

Why the while loop? Because even when we get woken up, the condition may no longer be satisfied, so we have to check it again — I'll say why in a moment. The idea is: I grab the lock; I can check conditions; I can go to sleep on conditions; I can wake up and re-check; but between acquiring and releasing, I always have the lock. That's how you want to think about monitors, even though we all know the lock is released underneath the covers for us. So: I check the queue; I go to sleep if there's nothing there; if somebody signals me, I wake up and check the queue again; and if it's no longer empty, I know — because I have the lock and I just checked — that I can go down and dequeue something. Then I release the lock. And yes, the reason for the while loop is exactly that somebody could slip in between us being woken up and the operating system reacquiring the lock for us, and grab the item off the queue. So we always have to check.

This is part of an important distinction: there are two types of monitors, Mesa monitors and Hoare monitors. The Mesa monitor is named after the Mesa operating system from Xerox PARC; the Hoare monitor is named after C.A.R. (Tony) Hoare, who developed it.
Now, why the while loop — why did we say "while isEmpty, wait, then dequeue"? The question in your minds, I'm sure, is why not "if isEmpty, wait," and then come out and dequeue. The answer is that between waking up and actually getting a chance to dequeue, it's possible for somebody to get in there. Remember how to think about this code: if I emerge from condition wait, I know for a fact that I have the lock. The only window where somebody can get in is between somebody signaling me — putting me on the ready queue — and me actually starting to execute. In that window, somebody might have gotten in there and grabbed the item.

That's exactly the distinction between Mesa and Hoare scheduling. Xerox PARC's Mesa operating system — I think I even put the paper up on the reading list if you're curious — does what most operating systems use. That's the situation where, between being put on the ready queue and finally starting to run, somebody may have grabbed the item out from under us. The Hoare style is much more complicated, so let me start with that second case. In Hoare scheduling, the signaler actually gives up the lock and the CPU to the waiter, and the waiter runs immediately. The way to look at it: here I am as the signaler; I acquire the lock; I signal; and the signal immediately hands the lock and the CPU to the waiting thread, which can now do anything it wants, because it knows no conditions have changed between the signaling and its running. When it finally releases, it gives the lock back to the original signaler, who gets to go. Great semantics — easy to think about, maybe — but it's messy from an implementation standpoint and really bad from a cache standpoint: this guy over here on the left is happily running, doing all sorts of stuff, and just because he decided to signal somebody, he loses the CPU and all of his cache state. That's not great for performance, and it forces a whole bunch of context switching.

Mesa, on the other hand, says the signaler keeps the lock, and the waiter is simply placed on the ready queue. When we signal, all we do is put the waiter on the ready queue and keep going; we release the lock and continue. Sometime later the timer goes off, the scheduler runs, and the waiter — which has been sitting on the ready queue all this time — wakes up and goes back to check its condition. So practically, we have to check the condition again, just to make sure nothing changed between the point we were signaled and now. Most real operating systems all use this Mesa scheduling: more efficient, easier to implement, better for cache state.

(Question from the room: isn't the one real downside that it's non-deterministic? That's correct — but the performance advantages far outweigh the non-determinism, because you don't want a design where non-determinism can give an incorrect result anyway; you're going to design things correctly. And there are too many other sources of non-determinism for this one to be worth removing.)

Now let's do our fully bounded circular buffer, since it's been asked about a couple of times. We'll have one lock and two condition variables — one for the buffer being too full, one for it being too empty — and of course initially nobody is waiting on either. The producer acquires the lock and says: while the buffer is full, wait on the producer condition variable; when it's no longer full, enqueue the item and signal the consumer.
And the consumer says: while the buffer is empty, wait on the consumer condition variable; when that's done, dequeue and signal the producer. This essentially mirrors what we did in the semaphore version, but it's much cleaner: we're waiting on condition variables inside a lock, so we don't have to worry about anything we look at changing because somebody else is messing with us. Whenever we run any of this code — the while loop, the buffer-full check, the condition wait — all of it runs with us holding the lock.

What does the thread do while it's waiting? It's sleeping, not busy-waiting. Condition variables are interfaced properly with the operating system, so they'll put you to sleep — and by the way, you can imagine building condition variables using futexes.

Why the while loop? Mesa semantics: in most operating systems, when a thread is woken up by signal, it's simply put on the ready queue and may or may not reacquire the lock immediately. But the semantics you program to are: I never run code without holding the lock, if I had the lock to begin with. I grab the lock; I go to sleep; yes, the lock is released temporarily under the covers, but I'm not running then — and before I get to run again, the lock will be reacquired by the system before I return from condition wait. So I always have the lock.

How can producer and consumer both be in the critical section if they share the lock? The answer is they're not both running in it: one of them is sleeping, the other is running. Let's walk through what actually happens. Say the producer is sleeping in condition wait. The consumer comes in and says "while the buffer is empty" — which it isn't; there's something there — so it goes ahead and dequeues, and then signals the producer. Notice that the consumer is running and holds the lock; the producer is not running, he's sleeping — so the lock really is owned only by the consumer.
can access the database when there are no writers writers can access the database when there's no readers or writers because we can only have one writer at a time and only one thread can manipulate state variables at a time now hold your breath for a second don't do it too long so you turn purple but i'm going to show you the state variables that are going to let us do this okay so the reader is going to wait until there's no writers it's going to access the database and then check out which is going to wake up a waiting writer if there are any and the writer is going to wait until there are no active readers or writers access the database and wake up readers or writers okay and we have these four state variables and two condition variables now this sounds bad because it's complicated but it's not okay very simply the four variables are as follows how many active readers are there that's how many uh readers are actually reading the database how many readers are waiting okay that would be the number of readers that are not allowed into the database because of a writer similarly the number of active writers is the number of writers that can be in there if you think about that our constraints say that aw can only be zero or one and the number of waiting writers is the number of writers trying to get in there and then we're going to have two condition variables to sleep on and if you look at the code for a reader what you're going to do is you're first going to check yourself into the system so all monitors you start by acquiring the lock and now we're going to check conditions now what's cool about this is we acquired the lock so no other thread can get in there which means i can do arbitrarily interesting checks and my check as a reader is if there are any active writers or any waiting writers i have to go to sleep because as a reader i'm not allowed in the system because if there's an active writer clearly i can't read and if there's a waiting writer then that means 
there must be an active writer and that waiting writer is going to get to go next and so as long as there are any writers in the system at all this solution is going to increment the number of waiting writers waiting writer plus plus going to sleep on the okay to read and then when i wake up from that i'm going to decrement waiting writers and try again okay and i'm going to keep looping in here until there are no writers of any sort in the system okay and then basically uh when i've exited this i know that there are no active writers or waiting writers and so now there's one active reader namely me okay and i release the lock okay and i do the actual database access okay wr is a waiting reader okay and then when i'm done accessing the database at this point i grab the lock again i check out by decrementing the number of active readers because i'm no longer active i'm not in the database anymore okay and if uh at that point there are no active readers but there is a waiting writer then what i'm going to do is i'm going to signal the waiting writer to wake up okay and i'm going to release the lock now notice what's interesting here is if active readers are not zero then i know that some other reader will come on and wake up waiting writers and if active readers are zero and there are no waiting writers then there isn't going to be anybody in the system to worry about or there's an active reader or active writer who will wake us up so we'll go through this more i'm going to actually if you'll bear with me talk this through a little more so i don't leave you in a totally confused state but there's a couple of things i want uh to thank you to think about so first and foremost rights are prioritized over reads in the way that it's been done okay and the reason the writers are prioritized over the reads is because well is that a good idea well standard data shows that they're far more readers than writers and that writers tend to happen quickly and you would like all of 
their rights to be reflected in the readers as quickly as possible so a writers are not they're rare and b we would like their state to update so we give priority to writers over readers only for this example that's not to say that every version of this might be the right thing okay um the other thing to notice is this is a pattern grab the lock check conditions are the conditions right uh for me to go if there are writers in the system the conditions aren't right right now at that point i update information to let other people know that i'm in the system waiting and then i go to sleep and i try again and then when i'm ready oh the conditions are now right i increment the fact that i'm an active reader i release the lock and notice that i've actually released the lock before i go into the database okay and you might ask you know why release the lock here can anybody tell me great exactly right so another reader can come along so notice uh this is like metal locking okay this is not locking this is meta locking because the lock is protecting my invariance on entry into the system it's not locking the database it's checking whether my constraints are right to allow me into the database that's different thing that's why i release the lock before i even do the database because my entry code is checking conditions okay and my exit code will wake up anybody who needs to be woken up okay and this is gonna exactly allow multiple readers okay great because the next reader will come along and go through the same thing notice that there are no no writers in the system and get to go forward now what we know about the right side is it better be the case that if there's a reader in the system there'll be no possibility for a right writer to start right that would be bad so the writer using the same pattern is going to acquire the lock they're going to say well if there are any active writers or there's an active reader either of those situations kill it for a new writer okay it 
kills it for the new writer because a new writer can't write if there's either an active writer or an active reader so they got to immediately go to sleep so our condition for entry is different in this case if there's either an active writer or an active reader then what we're going to do is we're going to say oop conditions aren't right i'm going to increment the number of waiting writers i go to sleep when i get woken up i can decrement the number of waiting writers because i'm not waiting anymore and go and check my condition again okay and rights must wait until all of the running readers are dead yes okay and at the point when i finally have no active writer or active reader then i become an active reader i release the lock just like i did with reads and i do the database access okay now why is it an issue if there's active active readers for a writer to go forward well the readers are doing something with the database that uh may look at many different fields and a writer's gonna go in there and start screwing that up by changing the consistency so the assumption here has to be that the writer is gonna touch things that are going to screw up the reader it's not the case that the reader's just going to look at one thing the reader is looking at a full database record or what have you okay so in this particular set of cons it's kind of like cash coherency but bigger okay you could imagine that records may have many fields in them and if the fields aren't consistent uh then the reader is going to have problems and so a writer shouldn't come along and change anything until there are no readers left okay and so the exit on the right side is going to be similar we're going to grab the lock we're going to decrement we're no longer an active writer um and then we have an interesting exit here like if there's a waiting uh writer then we're going to wake somebody up okay otherwise uh if there's any there are no waiting writers but there are waiting readers we're gonna 
broadcast so notice the difference i wake up one writer but i potentially wake up all the readers and then i release the lock okay now interestingly enough here let's i normally i don't do this at this point in the lecture but what would happen if there were many waiting writers and we broadcasted to all of them to wake up do we get bad behavior by having more than one writer suddenly using the database well let's think this through notice what happens when a when a writer wakes up from condition weight the first thing that happens before they come out is they grab the lock so only one of those many riders who woke up got the lock and got to emerge from condition weight they're the ones that decrement the number of waiting writers go through the loop notice that there are no longer any active readers or active writers and get to release the lock after incrementing active writers all the other ones then wake up and turn but at that point they're gonna look through and they're gonna say oh active writers is now one you know greater than zero so they'll go immediately back to sleep so even if we mistakenly broadcast and wake up all of the writers only one of them would get through now that's a great question what if they're interleaved and the answer is why can't they possibly be interleaved what's the paradigm here because of the lock right i forget for a moment that uh we go to sleep and leave out locks okay put that aside in your brain what you need to think now is all of the code between acquire and release has the lock there can be no interleaving inside between acquire and release therefore when i'm checking conditions and changing waiting writers and all i'm reading writers all of that stuff there's only one thread doing that at a time because it has the lock and therefore this consistency there's no interleaving there in that header code as soon as i release the lock then there could be interleaving but that interleaving is is set up to only allow multiple 
readers or a single writer okay well broadcast down here the only reason only one of them wakes up is it's not actually that only one of them wakes up they all wake up but only one of them finds that the conditions are right for it to run it gets to run the rest of them put themselves back to sleep again okay and there's no priorities here just the one that got the lock first gets to notice uh that it's ready to go okay all right okay why broadcast instead of signal well in that case we want multiple readers and if you look at the reader case if there are no writers then all the readers get to go all right why get priority to writers well we gave we talked about that earlier now what i'm not going to do now because we we've run out of time but we'll do it next time is it's fun to look at a simulation where you can actually see these variables go up and it might help you a little bit but i'm gonna um i'm gonna let you guys go for now i'm sorry we've gone over a little bit but uh we've done a lot today this was a big lecture so i apologize for all the topics here we've been talking about atomic operations is something that runs to completion or not at all but we've now moved it to be looking at uh instructions so we talked about hardware atomicity primitives okay disabling interrupts test and set swap compare and swap these are all atomicities okay we showed several constructions of locks we've looked at ways of using disabling of interrupts looked at ways of not having busy waiting and not tying up resources and ultimately what we did is we separated the lock variable from hardware mechanisms to protect uh the implementation of the lock okay the final the other thing is we talked about semaphores again as being good but maybe too complicated and so we started introducing monitors here which is a lock plus one or more condition variables and uh monitors are really representing the logic of your program and next time we'll we'll talk about the reader's writers a 
little bit more and show you um walk through an actual simulation so you can see all those different variables changing okay now um now the question about uh on the uh chat here which i would be happy to answer people are welcome to go if they like but if one thread signals or broadcasts does the scheduler have to wait until the thread releases the lock before it wakes up no because we have mesa scheduling what happens uh is the broadcast or signal merely takes threads and puts them back on the ready queue that's all it does it doesn't have to wait for anything okay just puts them on the ready queue and then it returns running okay and uh well um and then when the thread wakes up from the ready queue at that point it tries to reacquire the lock inside the implementation and it might get put immediately to sleep again because the lock's not available and all of those things that have been woken up will go through one at a time all right and yes we'll talk a bit about uh implementation of monitors um if you like but i want to let you guys go i hope you have a great weekend and uh we'll see you on monday ciao
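The readers/writers monitor walked through above can be sketched in Python with a lock plus two condition variables. This is a sketch only: the class and field names (`RWDatabase`, `AR`, `WR`, `AW`, `WW`) are assumptions based on the lecture's description, and "database access" is reduced to reading or writing a single field. The writer-priority policy matches the discussion: readers sleep while any writer is active or waiting, the last exiting reader wakes one writer, and an exiting writer wakes one waiting writer first, otherwise broadcasts to all waiting readers.

```python
import threading

class RWDatabase:
    """Many simultaneous readers OR one writer; writers get priority."""
    def __init__(self):
        self.lock = threading.Lock()
        # two condition variables sharing the same monitor lock
        self.ok_to_read = threading.Condition(self.lock)
        self.ok_to_write = threading.Condition(self.lock)
        self.AR = self.WR = self.AW = self.WW = 0  # active/waiting readers/writers
        self.data = 0

    def read(self):
        with self.lock:
            # readers must wait while a writer is active OR waiting (writer priority)
            while self.AW + self.WW > 0:
                self.WR += 1
                self.ok_to_read.wait()   # releases the lock while sleeping
                self.WR -= 1
            self.AR += 1
        value = self.data                # the "database access", outside the lock
        with self.lock:
            self.AR -= 1
            if self.AR == 0 and self.WW > 0:
                self.ok_to_write.notify()     # last reader out wakes one writer
        return value

    def write(self, value):
        with self.lock:
            # writers must wait while anyone at all is active
            while self.AW + self.AR > 0:
                self.WW += 1
                self.ok_to_write.wait()
                self.WW -= 1
            self.AW += 1
        self.data = value                # the "database access", outside the lock
        with self.lock:
            self.AW -= 1
            if self.WW > 0:
                self.ok_to_write.notify()     # wake ONE waiting writer first
            elif self.WR > 0:
                self.ok_to_read.notify_all()  # otherwise broadcast to ALL readers
```

Note how the "even if we mistakenly broadcast to all writers" argument from the lecture plays out here: every thread re-checks its `while` condition after waking, so extra wakeups just put themselves back to sleep.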
CS_162_Operating_Systems_and_Systems_Programming_Berkeley
CS162_Lecture_7_Synchronization_2_Semaphores_Cont_Lock_Implementation_Atomic_Instructions.txt

okay welcome back everybody uh to 162. um today we're gonna pick up where we left off we were talking about synchronization and um i didn't quite get to semaphores uh last lecture i did record a uh a little supplemental lecture for those of you that wanted something uh for your project spec but um anyway we're gonna pick up where we left off and then dive into some actual details about synchronization implementation so uh if you remember last time we started talking about how it is that uh multiple threads of control can get implemented inside of a kernel and we use this abstract uh stack model and we said suppose that we've got two threads s and t and they're both running this code where uh they start with a and then a calls b and then b just keeps yielding over and over again and what we saw was that's going to have a stack that looks somewhat like this where thread s calls a which then calls b which then calls yield and the blue is the user code and then it dives into the kernel because yield's a system call and at that point we dive in to uh execute run new thread and switch and the switch as we talked about um saves out all of thread s's registers and then switches uh s's stack to t's stack at which point uh switch returns but in the case of it returning we now have a situation where um we're actually even though we called switch on uh s's stack because we changed the stacks we're returning in t's stack and so at that point uh the return from switch actually takes us to the instance of run new thread that t had done originally which will then uh return across to user space uh restoring the user stack uh giving us yield which will then go back to the while which will then call yield which will call run new thread which will call switch which will save out t's variables switch the stacks and then we'll
return back up again and we'll get back and forth and as a result we'll end up multiplexing s and t forever here but the key interesting thing about this was this idea that uh the switch routine really is just saving all the registers uh including the stack returning uh from switch then returns in a different stack which then basically keeps executing t at that point the other thing that this uh particular code was uh intended to show is this notion that we have a user stack and a kernel stack are associated together and this kernel stack is typically called the kernel thread oftentimes because when we're running inside the kernel we're running on that stack and that's a thread uniquely associated with the user thread okay was there any questions on this this particular diagram uh takes a little getting used to because it's this idea that just by changing the stacks and returning from switch we're back executing a different thread and what's interesting also about this is that the combination of the user stack the uh kernel stack and the associated registers basically define everything about a thread so you can put it in the background you can put this on whatever particular um queues that you want i noticed somebody was asking about my background i forgot to turn that back on sorry um so the program counter is shared between threads because uh there is only one program counter we're assuming that uh there's only one hardware program counter now thread s and t however uh each have their own current position counter so i'm not sure which you're asking about there's only one hardware program counter because there's only one processor but what we do when we are saving thread s originally is we save out all its registers which would also include the program counter of switch and then when we restore on the other side and return from switch we're kind of now running in thread t so what does restore mean restore means swap in the
program counter for thread t so there's only one physical program counter but we have two um virtual counters because we have two threads okay and all the registers are saved yes okay all right now um the other thing i showed you about here is uh this idea of using a timer interrupt to return control so this is a solution to our dispatcher problem uh which is what happens if one of these threads goes into an infinite loop and never calls yield okay that's a problem so in that instance um we need to have something happen and as we talked about there are many options one of which is an interrupt so we showed how even if this blue routine is busy in an infinite loop the interrupt comes along uh the interrupt takes us in to the the stack inside the kernel and then at that point we can just run new thread and switch and we'll get exactly the same switching as we did back here we could have a situation where t is doing an infinite loop and not yielding but the interrupt will force us to get into red run new thread switch and then we'll switch over to s and so we'll at least get this fairness and that each uh thread s and uh the now broken thread t both get a fair use of the processor in that instance okay all right good so the timer interrupt routine looks something like this and maybe does various things involved in the interrupt and then called run new thread okay so we talked about that last time i wanted to give you a little bit more uh interesting information about this so the interesting question here is does the kernel thread also receive timer interrupts the kernel thread uh is not probably going to receive another timer interrupt because if we got a timer interrupt in the middle of a timer interrupt we'd get a recursive uh problem that would uh mess all the registers up and so on so when you take an interrupt as i showed last time you can take a look at last uh lecture i talked about the interrupt controller and what the interrupt controller does is as soon as 
you take an interrupt it disables everything and then the kernel as part of this entering into the interrupt routine is going to disable timer interrupts before any interrupts are re-enabled so you won't get a recursive timer interrupt inside of a timer interrupt now it could be that there's other interrupts that are very important that we leave enabled so it's possible while you're servicing one interrupt to get a higher priority one that that does happen but you're not going to get a timer interrupt inside of a timer interrupt okay now what i wanted to show you is a little bit of the instantiation of this so the x86 which is uh what you're going to be running pintos on has a couple of things that make it um uh work you could decide whether you think this is easier or harder it's a different processor than say risc-v that you dealt with in 61c but among other things there is this task state segment tss format and a lot of operating systems like pintos and linux and all of these uh only have one tss at any given time the way the x86 was designed is every task or thread in this instance would actually have its own tss but that turns out to get too messy and it's not very portable so instead typically there's only one tss but what's important in the tss if you look here is the fact that there are stacks for privilege level zero one and two okay and if you remember there are four privilege levels uh for the x86 but we only use zero for the kernel and three for the user but what's important about this is this uh privilege level zero stack and so whenever you make a system call or take an interrupt which takes you from u to k that means user to kernel what happens among other things is yes the privilege level goes from three to zero but uh in addition to saving out things like our current um stack and our current uh instruction pointer that's uh that's our program counter and flags and uh the current stack pointer that we're
running with we also save out uh we also pull back in the um kernel stack so if you notice in hardware the act of either getting an interrupt or a system call actually takes the uh privilege level zero stack and stores it into the processor so just the mere act in the x86 of going from user to kernel will actually switch that stack out for you so this uh this boundary which i'm showing you here with blue bounding into red actually has um hardware support in the x86 in other processors this interrupt routine the first thing it would do on entering the kernel is it would have to switch those stacks but as you'll see if you take a look in pintos at the files i show you down here tss.c and intr-stubs.S you'll see that that tss is being supported and that the um the stack is switched automatically in hardware okay and uh the other thing is once these things are saved then the handler saves all the other registers and so on um and then the kernel goes ahead and does its work and on return uh all of that's undone so among other things the um the user stack was actually pushed on the kernel stack and that gets popped off uh as part of returning to user level so if you take a look here for instance this is just showing you a diagram we'll do some more details about this later but this is roughly what's happening uh not just in pintos but also in linux and some of the others but when you're busy running uh user code here what you see is uh that the code segment instruction pointer this is the program counter we've been calling it is pointing somewhere in the user code space the stack pointer is pointing somewhere in the stack space of the user okay and as soon as that interrupt occurs what happens is as i mentioned automatically in hardware you see that the stack pointer here got pointed into the kernel that's what ss colon esp is and the instruction pointer or program counter gets pointed into kernel code so that all happens in the transition into the
into the kernel now the question here on the chat about will we learn how to formulate interrupts in assembly or is that something we need to learn by ourselves it's going to be some combination of learning it by yourselves we're going to tell you at a high level about this in class but you're going to have to read that dot s file for instance to see a little bit more about what's going on but um i will have more to say about entry into the interrupt handlers in a couple of lectures maybe even next time but um if you notice so just the mere act of the system call or interrupt in the x86 changes these stacks so these registers here that you see are actually processor registers and that changing into kernel mode has automatically switched those guys for us and we've gone ahead and saved what i'm showing you here in blue is you see that there are some additional registers that are part of the user code and so we need to save those too before the kernel starts running because otherwise we'll mess them up and so even though the kernel automatically pushes the old cs eip and ss esp onto the kernel stack that's done in hardware the kernel itself is uh the code that we've now entered is going to be responsible for pushing the rest of those user registers onto the kernel stack before it starts doing some computation and messing those registers up that's what's being shown in red here okay now this uh page table uh pointer notice isn't changing because right now we've just had a system call or interrupt we're not changing anything about the current process okay and so that's just going to stay the same throughout but now we're in the kernel we've pushed all of the user's information onto the kernel stack um and then we can just be executing kernel code and calling functions and so on because we have a kernel stack which is safe okay okay this ptbr is again this is the page table base register all right and that's basically pointing at the
address space okay now um and once we're done inside the kernel let's suppose this was just a system call or it was an actual interrupt that wasn't going to switch processes then we're ready to resume and at that point what we do is we just reverse the process so we restore the registers that were set there in software we pop them off the stack and we put them back where they were and then the last thing is going to be this uh interrupt return instruction which is going to automatically restore the um the user's uh program counter and stack pointer and then continue executing from where we left off so this what i've shown you here is the simple example of how either a system call or an interrupt that doesn't change the process would happen okay and you can see the uh the fact that we switch automatically to the kernel stack that's what this is down here and then we switch back to the user's um just to give you a little bit of a difference if you're interested in the scheduling portion you can take a look at switch.S and by the way a capital .S tends to mean assembly but if you take a look at that code in pintos what you'll see is once we get to this point here where we've saved everything on the kernel stack of the user code we can now do something interesting okay so if we schedule a new task because we're going to switch from s to t in our previous example what happens now is now we swap in a new page table base register because it's a completely new process let's say we've got new um we have new user registers okay i'm showing you this just before the end and when we do the interrupt return we're now going to end up returning to a different uh process okay and so we had the blue process and the green process and we're switching back and forth okay um the question about is this analogous to how a call instruction automatically pushes things onto the stack yes this is uh there's certain things that are automatically pushed in a call instruction certain
things that have to be done manually um and this uh if you think about it a little bit these ones that are pushed automatically are kind of the minimum required to maintain the correctness of the kernel stack and save those user registers that are going to get lost if they're not saved right away and then it's up to the software to decide what else to save and restore okay all right that's it for now until we get into something else the question here about how does the page table base register know uh where to look in the kernel code um i'm going to basically take a page from uh pre-2018 and say that the uh the user's memory space actually has the kernel in it as part of the upper part of the memory space and therefore it's just a matter of when you switch into the kernel you now have permission to use those page table entries okay as you're well aware uh after we had meltdown the kernels had to get a lot more careful about that okay so let's just say now that the uh the kernel has access to its space uh just by switching into kernel mode okay all right now uh the other thing that we did um last time was we talked about using locks to fix this banking problem and so if you notice we had a problem that um account uh modifications weren't atomic and so we put locks around them giving us a critical section and i had this little animation that was a little bit rushed toward the end of lecture six so i wanted to make sure we got it in this i also say it in the supplemental but if you look at the critical section what these locks mean is even when we have a bunch of threads all contending for that critical section only one of them is allowed in at a time and that's because the lock acquire lets the first one through and the rest are put to sleep and when you release what that will do is that will wake up one of the threads afterwards so when a finishes and goes through release then b's allowed and then c etc okay all right and so you got to use
the same lock with all the methods so we have to do lock acquire and release for all of the um things dealing with an account because the account is the thing we're protecting with critical section so that includes deposit withdrawal and all the other things you might do with an account all have to be protected with the same lock and that leads also to this example i gave you also last time which is well if we had a red black tree we could have a single lock at the root and then uh all the operations that thread a and b might do would acquire the root do modifications of the tree and release it as long as we did it this way we know we're correct because the the structure of the tree maintains its consistency because we lock it before we do any modifications and therefore only one thread's allowed to be in it okay now the kernel um and uh threads now can go back and forth and exchange information through this tree without worrying about the tree becoming uh incorrect because of race conditions i mean there are ways of making it faster by putting more locks in the middle of the tree but that gets complicated so a question here about our kernel thread register saved somewhere when the program is running uh user code and the answer is uh they don't need to be okay and the reason is if you think about this example uh we we use the kernel kind of like it's a procedure call right we call into the kernel with a syscall or an interrupt calls into the interrupt handler and so the registers that are needed are um created on the fly by entering that thing and then there it's done when you exit any state that needs to be maintained longer than that's going to be kept uh in global kernel state maybe example being a red black tree for instance for scheduler what have you but we don't save the kernel's registers when we go back to user mode because that lower half of the kernel the part of the kernel which we call the kernel thread is really only there explicitly for when the user 
is not running and it and we sort of generate all of the things we need there on entry into the kernel all right now so um the thing that i didn't get to at the end last time which i really want to make sure we talk about i realize i'm taking a long time on synchronization but it's the hardest thing i would say that you learn in this class so the definition of what we're talking about here is that a bounded buffer where we have multiple producers and multiple consumers and the producers put stuff on the buffer and the consumers take things out of the buffer okay and it's a finite buffer and so we need some synchronization first of all to coordinate the buffer because we don't want the buffer to get screwed up all right and the second thing is we have to somehow allow the situation where the buffer is full and a producer comes along we need to be able to put that producer to sleep and wake it up later and when the uh and when a consumer comes along and the buffer's empty we also need to put it to sleep so in addition to keeping the buffer consistent which is similar to the question about the red black tree in the previous slide we got to do something else with the producers and consumers to put them to sleep okay and that's essentially what i said here we um and we don't want the producers and the consumers to have to run in any lock steps so we'd like fully asynchronous behavior um producers can arrive at any time consumers can arrive at any time all right and i gave an example of gcc um here where the pipes which you guys are all familiar with now that you're working on the shell homework is kind of one example where each one of these pipe symbols represents us represents a finite buffer okay and another is a coke machine which is my favorite example because uh you know this has a finite number of slots in it and when the coke uh delivery guy shows up if the machine's full uh can't put any more coke in there so what happens well we put him to sleep uh well maybe 
that isn't quite the way it happens but that would be this analogy and then students come along to buy coke and if there isn't any coke what do you do you fall asleep in front of the machine because i know you're all um you know you desperately want your caffeine and so in this example multiple producers might come and there's a finite number of slots in the machine and multiple consumers might try to pull things out and uh you know there certainly is multiple coke that would be in there at any given time but perhaps it's empty okay and you know obviously there's lots of examples of finite buffers like web servers and routers and everything okay um so yeah busy waiting is exactly right this is the equivalent of the guy shaking the machine until the uh the delivery guy shows up okay so that's going to be considered bad programming style so here's our basic buffer okay which is a structure that can hold i'm going to say it can hold any types here whatever your you know whatever you want to put into the buffer you know these are coke bottles or they're you know structures of type x and then there's a read index and a write index and the read and write indices are just integers that wrap around and clearly you've got to be careful that you don't try to put too much in the buffer because you'll start overwriting items in the queue and you don't want to read too much because then you'll get you know the read in front of the write and you won't be able to know when the queue is full anymore so we need to make sure that the write and the read indexes are kept uh consistent okay and um i i'm sure that you've learned about circular buffers in 61b or what have you what's tricky about what we're going to do here is we need to have the ability to have many producers and many consumers and things just work okay and we need to come up with what needs to be atomic and so this was our first cut at this uh which is well what we're going to do is we're going to acquire a lock
on the buffer for the producer and as long as the buffer is full we're going to spin and then we're going to enqueue an item and then we're going to release the buffer lock and in this particular implementation what's good about this is uh we don't have to worry about the queue getting screwed up because the queue item like in queue is inside the lock we've acquired the lock we've done something we've released it so that part seems okay maybe and for the consumer similarly we acquire the buffer lock and we wait for things to be empty and then we excuse me while it's empty we wait and then we dq an item and release and once again because we've acquired the lock before we dq then we know that we're not going to mess up the implementation of the queue okay but that's the only good thing about what i just got here okay this is just bad right and hopefully you can all see that let's think about this for a moment if the producer comes along and acquires the lock and then decides the buffer is full and it spins it's effectively tying up the processor while holding the lock which means that it's busy waiting for the buffer full condition to go false but that can't go false because the consumer comes along and tries to acquire the lock and goes to sleep because the lock's taken okay so this is a unresolvable situation okay so this is a bad implementation and so then um the second cut at this was simply well maybe what we do is if the buffer is full we will release the lock and then reacquire it and just keep doing that over and over again until the buffer is not full and then we enqueue and release and we know for an for instance that because of the way this is laid out uh we reacquire at the bottom of the loop and then when we check the buffer full we have the lock so that when we enqueue the item we have the lock okay yeah that's a case where the delivery man is blocking the machine and none of the students get their caffeine right and so the consumer is the flip side of 
this and believe it or not this kind of works i mean it does work but it's horrible right this is not a good use of time either this is only a little better than the previous one this one's a little better in that it doesn't deadlock and will eventually go forward but if you have uh a producer that shows up and the buffer is full but there's no consumers the producer is just gonna go unlock lock unlock lock over and over and over and over again uh wasting processor cycles okay so this is what we typically call busy waiting all right because we're wasting cycles and busy waiting is a way to lose points on an exam or whatever or certainly an implementation because you're wasting cycles doing nothing okay so we want to do better than that and that led us basically to higher level primitives so what's the right abstraction for synchronizing okay so a lock is good but a lock isn't uh quite enough now there's an interesting question here couldn't you just poll and the answer is well polling isn't really helping you here because the assumption is the producer can't do anything but deliver bottles okay if the producer could somehow go away and check again maybe that would be okay all right but in this particular situation the producer is trying to produce and it doesn't have anything else to do okay so um good primitives and practices are important so what we're going to do in this class is we're going to start talking about other primitives even in locking okay but first today we're actually going to implement locks to get us there but um you know synchronization is a way of coordinating multiple concurrent activities and as you're hopefully getting the flavor uh already if you do it incorrectly weird things happen and the weird things happen at the least expected moment okay right that's the murphy's law scheduler which you have to remember is always present or the malicious scheduler that means that essentially uh it will screw you up and mess up your
synchronization at the worst possible time and find the most obscure synchronization condition all right and that's that's the murphy's law scheduler so semaphores which i mentioned uh a while back basically were first defined by dijkstra in the late 60s and they're the main synchronization primitive in the original unix and the definition is a semaphore has a non-negative integer value and supports two operations uh p or sometimes down which is an atomic operation that waits for the semaphore to become positive and then decrements it by one okay and what this says here is twofold one the semaphore can never be less than zero and furthermore if anybody tries to execute a p operation on a semaphore that's zero it waits okay but this isn't a polling wait this is a i'm gonna we hope a good wait right just like with lock acquire we've been implying that that acquire puts you to sleep or does something better than wasting cycles this particular wait is going to be similar okay the opposite is up or v which is an atomic operation that increments the semaphore by one and uh wakes up any thread that happens to be uh sleeping on p and you can think of it as waking only one of the sleepers because if it wakes them all up only one of them will actually be able to decrement it from one back to zero and so uh basically you know the atomicity is maintained by both p and v that atomicity meaning you know when you try to execute a p operation or a down operation uh you'll never go below zero and you'll have to wait if you do and when you um execute an up or a v operation uh if you went from zero to one such that somebody was sleeping you'll always wake up somebody okay and p basically comes from proberen to test and v from verhogen to increment in dutch and that's because dijkstra uh created them okay so semaphores are just like integers except uh first of all they're whole numbers because you can't go below zero okay and secondly the only operations allowed
are p and v or up and down depending on your implementation okay um and the question about which of the two sleeping threads will be woken with v it's unspecified okay so if your application fundamentally depends on a particular thread like the first one being woken up that's not part of the spec unless it says so okay and you should always assume it's a non-deterministic choice unless uh you're told otherwise okay so the only operations are p and v and the operations must be atomic as we just described okay and so notice by the way that you can't read or write the value except initially okay the uh so notice that's part of the interface so you set the integer at the beginning but you can't read it later you can only do p and v now there are those out there that'll say well i looked at the posix version of semaphores which i encourage you to do and what you'll find is that uh they do give you a way to read it but um it's technically not part of the interface okay so keep that in mind and here's a railway analogy okay which is uh basically the semaphore we set it to two and the first train comes down here to the track and before it was able to pass the semaphore it does a p operation which doesn't put it to sleep because the value uh was greater than zero so the value goes to one that's fine here the value goes to zero that's fine these trains by the way are hanging out and having coffee with their cameras on and then another train comes along now at this point it tries to execute p and it is put to sleep okay all right now as soon as one of these trains leaves the yard and executes a v it's gonna wake up our guy here so the value will increment to one briefly and then go back to zero okay so that's the behavior of semaphores which you're now well aware of because of your design review um if you uh look at the two uses of semaphores that i talked about in my supplemental one is mutual exclusion otherwise
known as a binary semaphore or a mutex. This is just like a lock: you do a semaphore P to grab the lock and a semaphore V to release it, and notice that the initial value is one. So if you think about it, exactly one thread is going to be able to be in the critical section at a time, which is going to be exactly like a lock. Now, you can technically initialize it to one but increment it as much as you want; that's a good point. However, in that case it won't behave properly like a mutex. You might say, "Well, wait a minute, that's bad." The answer is: you violated your own spec. Synchronization is a contract between you and all the users, and if you're the only user, you've just violated your spec and broken your own code. What I'm going to show you is how to do good synchronization; if you put bugs in your code that break your synchronization, then all bets are off. Now, the other use is a scheduling constraint, where we set the initial value to something other than one; here I'm going to set it to zero. And what's interesting about this is that it lets you do a join operation on threads, for instance. If we set the semaphore to zero, the thread doing the join (which might be a parent thread) is going to put itself to sleep, because it's going to do a semaphore P on a zero; that'll go to sleep. But then, when the child finishes, it'll do a semaphore V, which will wake up the joiner and let it go forward. Now, the bounded buffer we're going to revisit, so we need some correctness constraints, which are, for instance: the consumer must wait for the producer to fill buffers if there aren't any; the producer has to wait for the consumer to empty buffers if it's all full; and only one thread can manipulate the buffer at a time, which is mutual exclusion. So this last one is basically saying we need a lock in order to keep the queue itself consistent. The other two are actual correctness constraints, and one is about the entry to the
queue and the other is about the exit from the queue, and we need a constraint on either half. And if you think about it, that's going to be true because these constraints are like half constraints: they're about something going below zero. So we need to arrange things so that we can put threads to sleep either if the buffer is full or if the buffer is empty. Now, the mutual exclusion is really just about trying to make sure that we keep the queue valid. And a general rule of thumb is to use separate semaphores for each constraint: we have a constraint for how many full buffers there are, a constraint for empty buffers, and a constraint for the mutual exclusion. And that's going to produce it this way: we start the number of full slots on an empty machine at zero; the number of empty slots on an empty machine is all of them, bufSize; and the mutex is one, because it's like a lock. So the producer is going to look like this: we come along, and the first thing we do is wait until there's space. Notice that if there are no empty slots, then emptySlots will be zero and this semaphore P will put us to sleep. Otherwise, if we get through there, we'll decrement the number of empty slots, because we're about to add a coke; then we grab the lock, enqueue an item, release the lock, and the final thing we'll do is increment the number of full slots. And then the consumer is exactly the reverse: again we ask, are there any full slots at all? If the answer is there are no full slots, it means there's no coke, so you as a student have to go to sleep. Otherwise, if we pass, we decrement the number of full slots, we grab the lock, we dequeue an item safely (because we have the lock), we release the lock, and we increment the number of empty slots, because we removed an item. All right, and there are three different things to look at. So the
critical sections in the middle here are locking, and they're about keeping the queue consistent; we could even put red-black trees or anything we like in there if we wanted. The full-slot increment with semaphore V is about waking up the consumer, and the empty-slot semaphore V is about waking up the producer. Questions? Okay, good. So why is there this asymmetry? Well, the producer does semaphore P on empty buffers and semaphore V on full buffers, and the consumer reverses that. The answer is: because they're doing symmetrical but opposite things. And is the order of the P's important? The answer is yes: if we reverse them, as I've shown here, we actually get deadlock. And if you think that through, it's pretty simple: the producer grabs the lock but then goes to sleep on empty slots, which means the consumer can never grab the lock to remove something from the queue, and so we're basically stuck. Can you have only one semaphore for both consumers and producers? That's not going to work easily; you can think through whether there might be a solution that would do it, but it's going to be more complicated than this one. Is the order of the V's important? No, it only affects scheduling a little bit. What if we have two producers or two consumers? The solution will just work. All right, now, administrivia. As you know, there's a midterm coming up on October 1st; that's a week from Thursday, so it's getting a little closer. We've talked a lot about this: it's going to have synchronization. Scheduling is on the schedule, but that's not going to be part of the exam. There's a midterm review next Tuesday from 7 to 9 p.m.; apparently we don't have any more details on it yet, but I'm sure they'll be announced when we have them. Okay, so I want to just dive ahead now and say where we're going with synchronization: we're going to implement various higher-level synchronization primitives using atomic
operations, to try to get us toward writing correct code. But we're going to start with hardware: what can the hardware do to help us build locks? We're going to start by talking about loads and stores and then move forward from there, and once we've figured out how to get synchronization out of the hardware, we're going to build interesting locks and semaphores and monitors and so on, and then finally we'll be able to write good shared programs. So we need to start with this hardware question, because right now everything we've said about synchronization has been floating in space, literally: we have lock acquire and release; well, how do you do that? We've got semaphore V and P; how do you do that? Yeah, you could use a library, but let's be a little more sophisticated and dive into how this is actually implemented. So our motivating example here is going to be the too-much-milk example, which is kind of fun. A great thing about operating systems is that the analogies between problems in the OS and real life are often very good, and they'll help you understand things a little bit. The downside is that people are much smarter than computers (or computers are much stupider than people), and so you need to be careful. Okay, so the example here is: you're living together with other students and you have a shared refrigerator. The first person gets home, you look in the fridge, and you're out of milk. So what happens? Well, because you have a good contract with your roommates, you leave for the store to get milk, and you arrive at the store at 3:10. But meanwhile your other roommate comes home, and they look in the fridge, and they're out of milk, and while you're buying milk, at 3:15, they're leaving for the store. And we'll assume that you're going in opposite directions, so you won't run into each other. The first person gets home at 3:20 and puts
the milk away; person B gets to the store at 3:20, they buy milk, they arrive home, put the milk away, and now you have too much milk. So this is a disaster of epic proportions, of course, and the question is: what can we do to make this work? Now, this is a pretty simple problem. The idea of leaving notes sounds like it might be a good idea. Putting your roommate to sleep, perhaps that's a good idea too; I don't know about you, but some of you might actually have roommates that are 180 degrees out of phase with you as far as sleeping schedule. But the question is: can you have too much milk? So, to start thinking about this, remember we've been talking about locks. A lock is basically preventing somebody from doing something: you lock before entering the critical section, you unlock after leaving, and you wait if it's locked. And remember, the most important idea behind synchronization is that all synchronization problems are solved by waiting in one form or another. The trick is to wait as little as possible, or if you're forced to wait for a longer period of time, don't waste cycles; basically, let somebody else run. It's all about waiting cleverly. And so, for example, we could fix this milk problem by putting a key on the refrigerator: you lock it, you take the key, and you go buy milk. Now, I don't know about you, but I suspect this fixes too much, right? Because if your roommate only wants orange juice, then that's a problem. Of course, we don't know how to make a lock yet, so let's see if we can start answering this question. What are our correctness properties here? We need to be very careful about the correctness of concurrent programs, since they're non-deterministic. And so the impulse is to start coding first, and then when it doesn't work, you either pull your hair out (you can see how well that worked for me) or you try to come
up with an actual set of correctness constraints first. And I highly encourage you to do that: think first, code later; always write down the behavior. So what are the correctness properties for the too-much-milk problem? Never more than one person buys, and somebody buys if needed. (And I will say, by the way, that hair is far overrated.) But the first attempt is going to restrict us to using only atomic load and store operations as building blocks. So let's assume the only things we've got that are atomic to start with are loads and stores, and just remember what that means: when you do a load, all of the bits load from memory at once; you don't get some of the bits. And on a store, all of your bits get stored at once. All right, so can we do something with that? Here's our first solution to the too-much-milk problem, and yes indeed, all of those who said "let's use a note": it sounds like a good idea. So we're going to leave a note before buying (it's kind of like a lock), and we're going to remove the note after buying (it's kind of like unlocking), and if there's a note, you don't buy; you wait. So this sounds great. The problem is that if a computer tries this, perhaps it's not going to work so well. So here's our code: if no milk, if no note, leave note, buy milk, remove note. So this looks like a first solution; let's look a little more carefully at it. Unfortunately, we have threads A and B. Thread A says "if no milk," but then thread B gets to run, because remember the Murphy's-law scheduler: "if no milk," "if no note," and then the scheduler comes back and A says "if no note," at which point thread A leaves the note, goes to buy milk, and removes the note. And meanwhile thread B has gotten past the "if no note," and now B leaves a note, buys milk, and removes the note. And if we pretend to be computers, then we didn't solve
any problem here. So the key thing here is that you've got to think like computers rather than like people. TL;DR: don't be a computer. Unfortunately, you're going to be designing code that runs on a computer, so let's see if we can figure this out. And the result is really that there's too much milk, but only occasionally. So what we've done is we have taken something that was almost guaranteed to be broken, and we've made it less broken. But less broken is kind of like a smaller disaster from a nuclear explosion: maybe it doesn't happen as frequently, but when it does, it's bad. And so synchronization problems that happen less frequently are far worse than ones that happen frequently, because with the frequent ones you at least have a chance of finding out what's going on. So does everybody see why this only happens occasionally? Because this does mostly work; it mostly takes care of our problem, because you mostly won't get a switch right at the wrong point here, and so the note will mostly do the right thing. Yeah, too much milk is the nuclear option here: the thread gets switched after checking the milk and the note but before buying the milk. That's unlikely, but it's still not good. So the problem is worse now, because it's failing intermittently, and you're at the beginnings of the joys of multi-threaded computation. This is going to be great, but you've got to learn how to synchronize. And by the way, can you never have too much milk? Maybe I'm wrong, to the person who loves two gallons of milk, but if you ended up with four gallons instead of two, that might be a problem. So what can we do? I saw somebody in the chat suggest two notes, one for person A and one for person B, but before we try that, what if we try something else? What if we set the note first? So let's leave the note, then we say: if no
milk, if no note, buy milk; remove note. Does this work? What do you think? Well, there's only one note. So what happens here? With a human, probably nothing bad; with a computer, nobody ever buys milk. Because what happened is: we left a note, we checked to see if there's no milk, and then we said, "Well, if there isn't any note, go buy milk," but there is a note, right? Our own. And then we remove the note. So this solution one-and-a-half is not any better; in fact, now there's no milk at all, so that's worse, I would say. So let's try our second solution, which is two notes. Thread A leaves note A and thread B leaves note B; thread A says, "If there's no note from B, then go off and buy milk," and removes its note, and thread B says, "If there's no note from A, go off and buy milk," and removes its note. Now, does this work? Okay, yeah, good: they could each leave their note just before checking for the other's note. So it's possible for neither thread to buy milk: context switches at exactly the wrong time (remember the Murphy's-law scheduler) lead each thread to think the other one's doing it. So this is really insidious, because it would happen, but at the worst possible time. And there was a time in the early days of Unix where there were various problems that could only be solved by rebooting, or that would occasionally cause a crash maybe once a week, and that's an issue. I'm experiencing one of those with a new network switch that I just purchased: it's got a memory leak, and so it eventually crashes every eight days, which is a little bit annoying. That's a very rare synchronization problem of some sort. And you could say this is also similar to what happens with humans: "I'm not getting milk; you're getting milk." And this is actually a liveness failure called, amusingly enough, starvation. So this
actually works out pretty well. All right, so this isn't helping us; how about this one? Now, I'm going to leave this up for a second for you to really digest. We still have two notes. (And milk is better for you than water, by the way, unless you're lactose intolerant.) So thread A leaves its note, and then it says: while note B is there, do nothing. Notice this is a spin. And then, if there's no milk, buy milk, and then remove note A. And thread B does not do a parallel thing; it does something slightly different: it leaves a note with its name on it, and then it says, if there's no note A, go buy milk, and then remove its note. So A and B have different code. Does this work? So I'm going to tell you yes, but what do you think? Both thread A and thread B can guarantee that either it's safe to buy, or the other will buy and it's okay to quit looking. So for instance, at X here (that's what this X means): if there's no note B, we know for a fact that it's safe for A to buy, because A has already left a note, so if there is no note B, there's no way for thread B to leave a note and not notice A's note. As far as Y is concerned: if there's no note A, then we know for a fact that, because we've left note B, A either will not have been in this code at all, or it will be spinning while we're off buying milk, and so it will not try to look for milk until after we come back and remove the note. So it works. How many of you feel fulfilled by this solution? Hopefully not too many of you; maybe this will convince you to give up milk, I don't know. Let's take a look at this, though. For instance, "leave note A" happens before "if no note A," in which case we can busy-wait here and wait for note B to be removed, in which case we can now check the milk, and we'll know that there's no way for B to be in the "if no milk, buy milk" at the same time A is. And vice versa: if we leave note B and we say
"if no note A," as long as that check happens before note A is left, then we know for a fact that we can go in and buy milk, and A will be caught up in its while loop while we go buy the milk; then, when A finally gets around to looking, there's already milk. So what do you think? I mean, you could write code like this, and in fact this generalizes to n threads. So for those of you living in sororities or fraternities, you're okay, because we can handle n people; there's even a paper on this, a solution to Dijkstra's concurrent programming problem. By the way, Leslie Lamport has written some of the most interesting theory papers you'll run into; we'll talk about a couple of them at the end of the term, and if you take 262 with me when I'm teaching it, you'll learn about several of them. But yes, this generalizes. So our solution protects a single critical section piece of code, which is "if no milk, buy milk." Great, but isn't that the way we were thinking about locks before, when we were thinking at a higher level? We had to go to a lot of work to get the locks working, and so the question might be: "Wait a minute, are you saying, Professor Kubi, that somehow all of this stuff is an acquire, and this is a release, and that's an acquire, and that's a release, except it's not the same for threads A and B? That doesn't sound like a good solution." So why don't you hold off on implementing this at your sorority until we give a better solution. So it looks like a lock, but it's not very easy. And solution 3 works, but it's very unsatisfactory: it's really complicated, A's code is different from B's, which would be different from C's, D's, E's, F's, and G's, and even worse (or not; maybe it depends on what you think is worse), A is waiting by spinning. So we've got that thing I told you you're not allowed to do; I'll show you right here: busy waiting. A doesn't go to sleep when it's waiting.
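For the record, solution 3 can be transcribed into runnable form. The sketch below is my own rendering in Python: the function names and the dict holding the notes and the milk count are mine, and I'm relying on the fact that in CPython a single read or write of a dict entry happens under the GIL, which stands in for the atomic loads and stores the solution assumes (real hardware would need more care).

```python
import threading

# shared state; plain reads/writes of these entries play the role of
# the atomic loads and stores assumed by the lecture's solution 3
state = {"note_a": False, "note_b": False, "milk": 0}

def thread_a():
    state["note_a"] = True          # leave note A
    while state["note_b"]:          # spin (busy-wait) while note B is up
        pass
    if state["milk"] == 0:          # point X: now it is safe for A to buy
        state["milk"] += 1
    state["note_a"] = False         # remove note A

def thread_b():
    state["note_b"] = True          # leave note B
    if not state["note_a"]:         # point Y: A is absent or spinning
        if state["milk"] == 0:
            state["milk"] += 1
    state["note_b"] = False         # remove note B

a = threading.Thread(target=thread_a)
b = threading.Thread(target=thread_b)
a.start(); b.start()
a.join(); b.join()
assert state["milk"] == 1           # exactly one purchase, every run
```

However the scheduler interleaves the two threads, exactly one gallon is bought; but notice the asymmetric code and the spin loop in thread A, which are exactly the objections raised here.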
It's busy waiting, so this is not a good solution either: it's wasting time ("spinning" is another word used for that). So there's got to be a better way. And first of all, we have to expand our set of primitives from just loads and stores to something else. It is interesting that the original MIPS processor, designed by Hennessy down at Stanford, didn't actually have anything other than atomic load and store, and it turned out that that ended up being way too complicated for designing operating systems and user code, and so subsequent versions of MIPS actually added some atomic instructions of the form that we're going to talk about here. So we need something other than loads and stores, and then we're going to use those to build higher-level primitives. So what we want to do, let me just refresh your memory, is something like acquire/release, where this is fully symmetrical no matter how many threads there are. And we would like something, by the way, that would also allow us to have multiple locks, so we could have a milk lock and an OJ lock and whatever else, a yogurt lock. And then our milk problem is very easy: acquire the milk lock; if no milk, buy milk; release the milk lock. All right, everybody with me on this? Okay. So the difference between busy waiting and what a semaphore down (P) does is the following: whether a semaphore down waits by spinning is unspecified by the interface, whereas busy waiting is guaranteed to be a bad idea; you should assume a semaphore down puts the thread to sleep and lets a different thread run. So what I gave you when we talked about locks and semaphores, before we dove into the implementation, was the assumption that when you're waiting, you're actually put to sleep and not wasting cycles. So the opposite of busy waiting is sleeping; they're not both looping until something happens. Looping still uses cycles; sleeping doesn't. And so, going back to the analogy from
the beginning of the lecture, if you remember, we talked about those multiple threads switching back and forth, from S to T. The way you put something to sleep is you take it off the ready queue and you put it on a wait queue, so that the scheduler doesn't give it CPU cycles at all. It yields control of the CPU, and that means when somebody releases the lock, it's going to wake them up and bring them off of the wait queue. So that's where we're going. So everybody got the difference now between spin-waiting and sleeping? And the interfaces that we gave you for both locks and semaphores could spin-wait when they're waiting, but that would be a bad implementation. Now, how do we implement locks? A lock prevents somebody from doing something: you lock before entering, unlock after leaving, and wait if it's locked. The atomic load/store gets us a solution like Milk Solution 3, and that's not good. So what about a lock instruction? What if we had an instruction such that when you execute the lock instruction, it does a lock, and then there'd be an unlock instruction? Is this a good idea? It certainly would prevent us from having to build these complicated Dijkstra-style things out of loads and stores. So I have somebody who says it's slow. It turns out not necessarily; I mean, it's probably complicated enough to be slow, and that would be a good CS152 answer, but there's something fundamentally more complicated about this. What part of locking doesn't seem like it corresponds well to a lock instruction? Can anybody think? Yeah, exactly: putting the thread to sleep, for those of you who were looking ahead. That's a good answer. So what about putting it to sleep? The problem is that putting a thread to sleep is complicated: it requires knowledge of the current operating system, it requires you to know how the threads look on the stack, it would require you to know where to put stuff,
and so trying to have a hardware instruction that handles the sleeping part is really complicated. And in fact, you really don't want a hardware instruction that does that, because it would then force you to use one particular version of sleep, which makes no sense: it would prevent you from using different operating systems. And by the way, the complexity or slowness that was brought up by another person in the chat is also correct. The Intel iAPX 432 (you can look it up) had all sorts of interesting things: it had Huffman-coded instructions, so that they were only as long as necessary; it had all sorts of really complicated stuff; it also had a bunch of different hardware lock instructions. You don't find them other than in computer museums, because it was just too complicated and there was really no point. So we want to do something better, something simpler. Let's try interrupt enable and disable. We know we can do that, right? That's where we set a bit in the processor that says "ignore interrupts," and then if we turn off interrupts, then, because the timer interrupt isn't going to happen, we won't switch from thread A to thread B, and potentially we could get enough atomicity to do something in a critical section. So on a uniprocessor, perhaps we could avoid context switching this way: no internal events (the thread that's in the middle of a critical section doesn't do I/O or anything), it disables interrupts, does some operations, and then re-enables interrupts, and as a result we could actually end up with some sort of critical section. So here's a naive implementation of locks: the acquire says "disable interrupts"; the release says "enable interrupts." Does anybody think this is a good idea? So somebody asked what happens if you hit an error; it seems like too much power. Well, this isn't going to take a lot of power, but here are some problems. You
can't let the user do this, right? Because if the user could run our lock-acquire operation, which disables interrupts, they could crash the machine: disable interrupts, while(true). The other thing is that, as mentioned, you can only have one lock; there's only one lock in the system this way. And the other is: if it's a real-time system and you're busy in a very long critical section, this could be bad. You know what happens if you're in the critical section and you get the "the nuclear reactor is about to melt down, hurry up, help, help, help," and it's being ignored? So that could be a problem. So this seems like it's not good. All right, what can we do that's better here? Let's use disabling of interrupts, but instead of using it as the lock, let's use it to implement a lock. So this is a little different, and here's what we're going to do: we're going to have a value in memory (so this is just a memory location; I've called it "value"), and we're going to set it to FREE. You can think of this as a binary zero or one, and that's going to be our lock. So, assuming this all works out, we could have as many locks as we have memory locations, so that sounds good. And the way acquire is going to work is: we're going to disable interrupts first, and then we're going to say, well, if the value is BUSY, we're going to put the thread on a wait queue, go to sleep, and re-enable interrupts somehow; otherwise, we're going to set the value to BUSY and then re-enable interrupts. So notice that acquire only disables and re-enables interrupts for a very short time. That very short time is just long enough to see what the state of the lock is, and to possibly alter the state of the lock, or go to sleep if we can't acquire it. And the flip side, release, again disables interrupts just long enough to see whether somebody's waiting on the wait queue; if they are, we go ahead, pull them off the wait queue, and let them run; otherwise, we say the lock is
free and we re-enable interrupts. So the difference here, between using interrupt disable and enable as our acquire and release, is that we're using interrupt enable and disable to implement acquire and release. And fundamentally what's different here is the fact that we have a very short critical section, from the standpoint of interrupts: we disable interrupts, we do something really quickly, and then we re-enable them. So the interrupts are never disabled for a long period of time, but the user of this acquire and release can take as long as they want. Now, why do we need to disable interrupts at all? Well, this is to avoid an interruption between checking and setting the lock value. If we got a synchronization problem in our implementation of a lock, we would have a bad result, and so this disabling and enabling helps us make a good implementation, and then we can give the acquire and release to our users. All right, now, this still has some problems. That's okay; we'll fix some of them, but I want you to understand this solution first. So we need to disable interrupts for our actual implementation, and the critical section with respect to the interrupts is inside here, but that critical section is for implementing acquire and release. Now, if you look here, there are some funny things, by the way. Unlike the previous solution, the critical section inside the acquire is very short, so a person using this lock can take as long as they want with the lock acquired, because they are not going to impact the state of the nuclear reactor; we're probably okay there. But there's a problem here that's a little funny: what about re-enabling interrupts when going to sleep? If you look here, what we've got is a situation where, if you disable interrupts and then you say, "Well, if the value is BUSY, we have to put the thread on the wait queue," which is somehow putting it to sleep, and then actually go to sleep,
the question is: when do we re-enable interrupts? If you look here, if the value's BUSY, what you see is that we're going to do something funny in here, because we're actually going to go to sleep with interrupts disabled, and that's going to be bad. So that's an issue: we can't go to sleep with interrupts disabled, because that will invalidate our whole solution. So could we re-enable interrupts at this point, before putting the thread on the wait queue? Well, we can't, because if we re-enable interrupts at that point, it's possible that, just before we put the thread on the wait queue, the malicious scheduler runs the other thread, which releases the lock, and then we come back here, put the thread on the wait queue, and go to sleep, even though the lock is free. So we can't re-enable interrupts there. Could we re-enable them here, after the enqueue? Well, the same problem: we put ourselves on the wait queue, interrupts get re-enabled, the release happens, and then we go to sleep and miss the wakeup. So we need to somehow wait until we're actually on the wait queue and asleep before we re-enable interrupts, so that if the other thread then releases, it will be able to wake us up. But what does that mean? It means we have to re-enable interrupts after going to sleep. So that seems like a problem, right? It seems like that doesn't make any sense, but it seems to be required for correctness. Now, how can this possibly be correct? Well, the answer is in the scheduler, and you're going to become very familiar with the scheduler once you get to project 2.
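The acquire/release structure just described (a short guarded section around the lock value, a wait queue, and sleeping while waiting) can be sketched at user level. This is my own illustration, not the kernel code: a `threading.Lock` named `guard` stands in for the brief interrupt-disable sections, and a per-waiter `Event` stands in for the scheduler putting a thread to sleep. Because an `Event` remembers a `set()` that arrives before `wait()`, this sketch sidesteps the re-enable-before-sleep race; a real kernel instead resolves it inside the context-switch code, as explained next.

```python
import threading
from collections import deque

class SleepingLock:
    """Sketch of the in-kernel lock: short guarded sections, sleeping waiters."""
    def __init__(self):
        self.guard = threading.Lock()   # models disable/enable interrupts
        self.value = "FREE"
        self.wait_queue = deque()

    def acquire(self):
        self.guard.acquire()            # "disable interrupts"
        if self.value == "BUSY":
            wake = threading.Event()
            self.wait_queue.append(wake)
            self.guard.release()        # "re-enable" as we go to sleep
            wake.wait()                 # sleep; release() hands us the lock
        else:
            self.value = "BUSY"
            self.guard.release()        # "enable interrupts"

    def release(self):
        self.guard.acquire()
        if self.wait_queue:
            # direct handoff: wake one waiter, lock stays BUSY for them
            self.wait_queue.popleft().set()
        else:
            self.value = "FREE"
        self.guard.release()

lock = SleepingLock()
count = 0

def worker():
    global count
    for _ in range(1000):
        lock.acquire()                  # user's critical section can be long;
        count += 1                      # the guard is only held briefly
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert count == 4000
```

Notice the direct handoff in `release`: when a waiter is woken, the value is never set back to FREE, so ownership passes straight to the sleeper, mirroring the wait-queue behavior described in lecture.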
I'm going to give you a little preview. Thread A is executing that acquire and making the decision, right here, that it's going to have to put the thread on the wait queue and re-enable interrupts. What does that really mean? Well, typically in the scheduler, what happens is you disable interrupts and you go to sleep, but at that point you context switch, switching from thread A to thread B, which then re-enables interrupts, executes for a while, disables interrupts, context switches, returns, et cetera. So if you think back to that S and T picture: S runs, then it hits switch, goes over to T, and then returns up. When you're in the kernel, in the middle of the scheduler, you do so with interrupts off. Why? Because if the switch routine is interrupted in the middle of saving registers and you go off and do something else, it's going to completely screw up all the register state. So interrupts have to be disabled in the deep parts of the scheduler already, and what we're actually seeing is that the way this thing works here is: we put the thread on the wait queue and go to sleep; interrupts are disabled in that part of the kernel, so when we go to pull somebody else off the ready queue and run them, interrupts are disabled, and when they start running, that's when that other thread re-enables interrupts. So the way we solve this little conundrum is exactly that: it's the other thread, the one that gets to run after we go to sleep, that will re-enable the interrupts. So this is your little mental puzzle for the night: to figure out why this works. So again: we have the interrupts already disabled, we've made the decision that, trying to acquire the lock, we're going to have to be put to sleep. That really means that inside the scheduler we put ourselves on a wait queue and then we go to do the switch, but that switch is already running with interrupts disabled; we restore the registers, we return from the context switch to the kernel, and we work our way back up to user
level, which re-enables the interrupts, and then we run thread B at user level. Now, this is challenging the first time you see it. What I want to do now is show you this with a simulation, because everything's better with a good simulation, right? So here is an example of an in-kernel lock simulation. Can anybody say why I'm calling this an in-kernel lock that we're building right now? First, in answer to the question in the chat: yes, we don't have to actively re-enable interrupts in that sleep portion of the acquire, because the other thread will re-enable them. Good. But the answer to my question of why I'm calling it an in-kernel lock is: we cannot give interrupt disable and enable to the user; we already know that. So whatever we're coming up with only works inside the kernel for now; we'll deal with that later. Here we have threads A and B, and they're going to synchronize with each other. What I'm showing you at the top here is some state: the value of the lock itself is either 0 or 1, depending on whether it's free or busy; we have some number of threads waiting on the lock; and we have the current owner of the lock. We also have the current state of thread B: thread B is on the ready queue, thread A is running. Remember, we alternate between ready and running for threads that are active. And notice that this owner field, who owns the lock, is just for our own edification; in this view of the lock there never actually is an owner that's tracked. There are some versions of locks you'll run into that keep track of which thread owns them, but it's not required here, so this owner is purely for our simulation. So here thread A is running and thread B is ready. Totally ignoring any acquire or release, we're going to alternate between A and B, because we've got our scheduler working. But then thread A runs and hits
a lock acquire, and it's going to go to the acquire code, which, as you saw earlier, says disable interrupts; that's what that little red dot means. So interrupts are now disabled. Notice that the value is zero. We ask: is value equal to one? Nope, because the lock is free. At which point we set value equal to one, and now I'm going to say that the owner is A, though as I told you, that's only for our simulation, because we don't actually have to record who the owner is. All right, so now that we've got value equal to one, the lock is busy (that's what the one means), we turn interrupts back on (that's the little green dot), and then we emerge from acquire. Notice the key interface with lock acquire: all the threads that are waiting are sleeping inside the acquire, and they only emerge from acquire after they've acquired the lock. So the fact that we returned from lock acquire means that we have the lock. How do we know we have the lock? We emerged from lock acquire; we returned. And now we're busy executing the critical section. Pretty soon, the timer interrupt goes off and we're about to switch from thread A to thread B. The timer interrupt goes off (that's what this dotted line is, the timer code), and that timer code is going to disable interrupts (that's why there's a little red dot), and then the scheduler is going to look at which thread is on the ready queue. Well, thread B is on the ready queue. So we are now going to put thread A on the ready queue (notice how it says ready and it's on the ready queue), take thread B off the ready queue, and thread B starts running. So here's a situation where thread A has the lock (the lock is acquired), thread A is on the ready queue, so it's not actually getting CPU cycles, but it's got the lock, and thread B is running. Notice we've re-enabled interrupts, and now thread B is the
one getting the CPU. So right now there's no stopping, no blocking, because A is in the critical section with the lock and B is not trying to get to the critical section yet; we're good. Of course, this wouldn't be fun if we didn't start getting some conflict between the two threads, and so all of a sudden thread B hits lock acquire. Now what? We know from what I told you earlier that thread B needs to go to sleep, because it can't acquire the lock: A's got it. So let's see what happens. B calls lock acquire. (A question here: why don't we set the value to one after waking up from sleep in acquire? Well, we set the value to one right away because we have to indicate the lock is busy. You're going to see in a moment why that's important: when B tries to acquire the lock, the fact that the value is one and not zero is what tells it the lock is taken. So we have to set the value to one; that is the lock.) So when B does lock acquire, we disable interrupts, we run the lock acquire code, and it asks: is value equal to one? Yes. Because value is equal to one, the lock is taken, and we've got to do something: we put ourselves on the wait queue and go to sleep. Notice at that point we're on the wait queue; this lock has a whole set of waiters, potentially, and right now it's just us. What does that yellow "wait" mean? It means thread B is no longer going to get CPU cycles, and it's not even going to be on the ready queue, because it's waiting. Putting it on the wait queue and taking it off the ready queue means it gets no cycles, because it's actually sleeping. So B is now sleeping on this wait queue, and now we go through the go-to-sleep, which is going to wake up A. And in the process of running A, now taking A off
of the ready queue and putting it on the CPU, we re-enable interrupts and we start running again in the critical section. So notice that the scheduler took us over to run thread B, but thread B tried to acquire the lock, which put it to sleep on the wait queue, and now A gets to run again; in this very simple simulation there's only A and B. All right, and now we run and we're about to release the lock. When we release the lock, thread A is done with the critical section; it's got to wake up B and tell it, well, you can go now. Because if you look at the way release runs, let's run that code: the release code is going to disable interrupts (that's because we're messing with the implementation of the lock), then ask: is there anyone on the wait queue? The answer is yep, there's somebody on the wait queue. So we're going to put them on the ready queue. And the act of putting them on the ready queue, so that they can now return from lock acquire, means we've implicitly given them the lock. Notice we didn't change the value from one to zero and back to one again; we left it equal to one. But the fact that we're now allowing B to run means that B now has the lock. That's why, if you notice, I switched this little owner pointer from pointing to A to pointing to B. The owner isn't a real thing; it's just for us to keep track of what's going on. So thread A puts B on the ready queue, re-enables interrupts, and starts running again. Notice that B doesn't run immediately, because B is on the ready queue; A gets to run for a little longer. Eventually the timer goes off and it's time to schedule B again, and we'll pick up B where it left off: we put A on the ready queue, we run B, it comes out of sleep, it re-enables interrupts, it emerges from the lock acquire, and voila,
we get to run the critical section. So this shows you, hopefully, how this particular implementation can work if we have the ability to enable and disable interrupts. Now, can there be threads on the wait queue for a different reason than trying to acquire the lock? The answer is no, but it's not for the reason you think: there are many wait queues, and you're on the wait queue for the particular thing you're waiting for. In this case you're on the wait queue for this lock; if there are 12,000 locks, there are going to be 12,000 wait queues, one per lock, because otherwise, when you go to wake somebody up, you won't know whom to wake. The other question is: why didn't we set value to zero? We didn't set value to zero because A woke up B and handed it the lock, which means the lock is still busy, which means it's still equal to one. If you look at this arm of release down at the bottom: only if there's nobody on the wait queue, so we skip that first arm, do we set value to zero. Now, the next question: what if the timer went off right after B was placed on the ready queue but before A enabled interrupts? Well, the timer can't go off; look at what you just said there. The timer can't go off because interrupts are disabled. Good. And by the way, in case you're worrying about that, if you're thinking this through further you might say: well, what if the timer went off while interrupts were disabled and I missed it? I'm very sad, I missed the timer. The answer is that's not how it works: interrupts that arrive while disabled are merely deferred until you re-enable, and then the interrupt fires. All right, good. Now, let's think about this for a second. This lock acquisition that we're looking at here: we can't actually put this implementation at user level; we'd have to run it in the kernel, because we have disable and enable of
interrupts. So that's a problem with this. Now, what you could imagine pretty easily is making acquire and release system calls that take a lock identity of some sort and do the lock acquire and release in the kernel. That's going to be our first thought on how to do this properly. Clearly, by going into the kernel, we can actually put the waiting thread to sleep, because the kernel, according to what you've learned so far, is the thing that puts threads to sleep. So the interesting question is: doesn't B put itself to sleep? Well, sort of, except that what happens here is B is running, and when it's in the kernel it calls the right part of the kernel to put itself to sleep, but it can only do that because it's running on the kernel-thread part of B. So it's in the kernel, and you can choose, if you like, to think of B as putting itself to sleep and getting woken up by somebody else; that's a deep philosophical question if you like, but in fact it's because B made a system call from user code into the kernel that it could even run this code. So, in order to put things to sleep, right now we're going to need to enter and exit the kernel. And hold off for one second here; I want to finish up this thought. If you remember, we talked about multi-threaded servers, where we might have a master thread that queues up a bunch of requests and a thread pool working off that queue of pending requests; we talked about this idea briefly with web servers and so on. If these threads are running at user level, the way they have to lock and unlock shared resources is to go through that common system call, so that they're in the kernel and able to run that code. And so we can build a very simple performance model here: given that the overhead of a critical section is x, we can talk about the time to context switch, acquire the lock, and so on,
do some work, and then context switch again, release the lock, and so on; there are a couple of system calls involved in this. And so even if everything else is infinitely fast and we have a thousand threads, the fact that our lock implementation has to go into the kernel means things are fundamentally slow. So what's the maximum rate of operations we could have? Well, if every thread has to go into the kernel at cost x, the maximum rate of operations is one over x, and that's going to make synchronization take a really long time. If you remember, we talked about Jeff Dean's numbers: if x is a millisecond to go into the kernel and come back, that's only a thousand synchronization ops per second, and if we have a lot of threads, that may not be enough. So we've got to do something better than going into the kernel. For instance, in the uncontended case, where lots of threads are all grabbing and releasing locks but the locks are unrelated to each other, we would like them to go as fast as possible. That's going to be our goal, and it's clearly going to require something other than going into the kernel. We talked briefly about this; I showed you this diagram in a different context, but it shows that a system call is about 25 times the cost of a function call. So whatever we do to synchronize ought to be something that doesn't require us to go into the kernel to disable interrupts and potentially be put to sleep. All right, so to do that, next time we're going to talk about atomic read-modify-write instructions that can run at user level. So the problems with the previous solution: we can't give the lock implementation to users, and it also doesn't work well on a multiprocessor. I don't know if you've thought this through, but if I have a bunch of cores and I disable interrupts on one, that doesn't disable
interrupts on another. You can do a cross-core disabling of interrupts, but that's very expensive, and so you don't want to do that. So we need something that actually works on a multicore, and the alternative is going to be these atomic instruction sequences: instructions that read a value and write a value atomically, where the hardware gives you the atomicity. So: we gave you loads and stores, and we said that was messy, because the Dijkstra solution was kind of a mess; Lamport gave us the generalization; we talked about interrupt disabling and enabling, but it's not general enough and you can't give it to users; so we need some other atomic primitive. That's for next time; don't worry about the details now, we'll cover it first thing next lecture on Wednesday. But, for instance, test-and-set is a good example of one that's particularly useful: you give it an address, and it grabs the value in that memory location and stores a one there, atomically, in a way that can't be interrupted. It turns out that if you do that, then test-and-set on a memory location becomes a synchronization op that you can use to make a very simple lock. That will be for next time. All right, so in conclusion: we've been talking about atomic operations; we talked about the difficulty of having multiple instructions that we need to treat together, and so we need locks around them, at minimum, to make a multi-instruction atomic operation. We started talking about atomicity primitives like interrupt disabling, and we showed you several constructs for locks. We haven't gotten to some other interesting atomicity primitives; that's for next time: test-and-set, swap, and compare-and-swap; we'll get to those. We've started our implementations of locks, and we'll continue with that next time. Okay, so let me briefly see here:
a question about whether timer interrupts are allowed in these interrupt-disabled blocks. For the questions on the chat: when interrupts are disabled, you don't get timer interrupts in there; that's the point. When you re-enable them, the pending timer interrupt then shows up. So next time we're going to start talking about synchronization via these other atomicity primitives that will allow us to construct locks at user level. All right, I've held you for too long; I hope you have a good night, and we'll see you on Wednesday.
CS 162: Operating Systems and Systems Programming (Berkeley), Lecture 13: Memory 1: Address Translation and Virtual Memory

Welcome back, everybody, to CS 162. We're going to move on to start talking about address translation and virtual memory now, but just to remind you, last time we talked about deadlocks, and we were distinguishing between deadlocks and starvation. Starvation is a general situation where a thread waits indefinitely, maybe a low-priority thread waiting for resources constantly in use by high-priority threads. Deadlock is a particular type of starvation where there's a circular waiting condition that's not going to resolve by itself. In the case of generic starvation, there's a good possibility that, for instance, if all the high-priority threads went away, the low-priority one would get to run; in the case of deadlock, because of the circular wait condition, it's never going to resolve. Here was the picture we used: thread A is waiting to acquire resource 2 while already owning resource 1, and thread B is waiting to acquire resource 1 but already owns resource 2; that's the circular waiting we talked about. And of course, deadlock is a type of starvation, but not vice versa. Then we went through some examples and came up with four conditions for deadlock. These are necessary but not sufficient: you need all four of them, and having all four doesn't necessarily mean you have deadlock, but if you have all four, there's at least a chance of deadlock. Mutual exclusion says that a resource can be held exclusively by a thread; hold and wait says exactly that: a thread holding at least one resource is waiting for another one; no preemption means it's not possible to take resources away from a thread; and then, finally, the circular wait condition
says that you have a set of threads T1, T2, ..., Tn that are all waiting for each other in a cycle. If you have all four of these things, there's at least the possibility of deadlock. We talked about an algorithm for figuring out whether you're stuck in a deadlock situation, and then about a number of ways to avoid deadlock. One of the things we covered was the Banker's algorithm, which is a way of dynamically handing out resources so that you won't get into deadlock. The assumption is that every thread pre-specifies a maximum number of each type of resource it needs; it doesn't have to ask for them all at once, it can ask for them dynamically. The threads then request and hold their resources dynamically, and the Banker's algorithm decides whether to hand those resources over based on whether doing so could lead to deadlock; it basically uses the deadlock detection algorithm as a subroutine, as we showed you. For each request a thread makes, we do a thought experiment: if we gave this resource to the thread, would there still be a way to get all of the threads to complete without deadlocking? If the answer is yes, we hand the resource out; if the answer is no, we put the requesting thread to sleep. So, as a summary, the Banker's algorithm prevents deadlocks by stalling requests that would lead to inevitable deadlocks; we called those unsafe states. Notice that the Banker's algorithm doesn't fix every problem: if a thread grabs a bunch of resources and then goes into an infinite loop, the Banker's algorithm doesn't help you with that. I gave one very simple example of the Banker's algorithm, though I didn't call it that at the time: here was this example of
two threads that happen to be asking for locks, but in opposite orders. Thread A asks for the X and then the Y lock, and thread B asks for the Y and then the X lock, and as we talked about last time, it's possible to get stuck in deadlock that way. But if we actually have the Banker's algorithm running, then A says, gee, Banker's algorithm, I'd like to acquire X, and the Banker's algorithm says, go ahead, because you're not going to deadlock anybody. However, as we talked about last time, if B then asks for Y and gets it, we are stuck; we're not quite deadlocked yet, but no thread is ever going to be able to make progress, so we're in an unsafe state at that point. What happens with the Banker's algorithm? It does that thought experiment: suppose I gave thread B the Y lock; what would happen? Doing that thought experiment, it finds that A and B would end up deadlocked, and so rather than handing Y to B, it puts B to sleep. What does that allow? It allows A to go ahead and get Y, finish up, and release both locks, and then B can be woken up and move forward. So that's a simple example of how the Banker's algorithm can prevent this particular deadlock. We also talked about a couple of algorithmic responses to things like the dining philosophers problem, where they're each grabbing two chopsticks, and how to prevent that, basically by analyzing it with the Banker's algorithm. So you don't necessarily have to have the Banker's algorithm running dynamically, live; you can also use it to analyze an algorithm and figure out how to prevent deadlocks. So I'm going to stop there; I wanted to briefly see if there were any questions on deadlocks. I definitely recommend you take a look at last lecture, because we talked about a number of different examples of deadlocks and how to avoid them in
different ways. And of course, the simplest way, without the Banker's algorithm, to fix this A-and-B deadlock: does anybody know how we would prevent A and B from ever getting into deadlock without the Banker's algorithm? Yes: we pick an order, say X then Y, whatever order you like, and we always request resources in that order. So A would get X and then Y, and B would also get X and then Y, and you can prove there's no way to ever get a deadlock, because to get a cycle, one of the threads would have to hold Y and request X, which is going backward in the order. So you can actually do a proof that shows there's no deadlock there. Good. All right, so we're going to move on. We've been talking a lot about virtualizing the CPU, and it's time to move on to some other resources. In general, different processes and threads share all the same hardware: you need to multiplex the CPU (that was scheduling and some of the lower-level mechanisms we talked about), you need to multiplex memory (we're starting that today), you need to multiplex disks and devices (that's a little bit later), and we'll even talk about virtual machines, where you essentially virtualize a hardware view of the whole machine; that's also later. But today we're going to focus on memory. Why do we worry about memory? Well, the complete working state of a process is defined by its data in memory and registers. If you were to take all of the state in the system and put it aside somewhere, and then throw out the CPUs, get new ones, and reload them all up, you'd be able to pick up where you left off. So memory is pretty important; it actually represents the running state of the system. And you can't, among other things, let different threads use the same memory, because what you're going to end up with is either interference, or you're going to have a
situation where private information gets stolen, etc. I like to think of this in terms of physics, where two pieces of data can't occupy the same location in memory. Now, if you were to get me to talk about quantum computing at the end of the term, when we have some random topics, I might modify that a little bit for a quantum machine, but for now, two pieces of data can't be in the same place, and therefore we have to virtualize resources somehow to get around this problem. And you really don't want different threads having access to each other's memory unless they intend it, because otherwise you can get malicious modification of the state of a process. (Does UCB have quantum computing work? Yes, we have some going on; in fact, we have a brand new big grant that just started. If you're interested, we can talk at some point soon.) So, if you remember, in the very first lecture or two we talked about some fundamental OS concepts; this was lecture two, and one of them was the idea of a thread, the execution context: a serial chain of execution with a program counter, some registers, a stack, etc. The other piece that was very important was the address space, either with or without translation, which, as you recall, was the set of memory addresses accessible to the program for reading and writing, and it may be distinct from the physical underlying memory space of DRAM; that's when some translation comes into play, and that's when we start getting virtual address spaces, which we're going to dive into today. We talked about processes, which are what you get when you combine a set of threads and a protected address space, and then we talked about dual-mode operation, which was basically how to protect the address spaces that the operating system is producing and make sure that processes can't just
randomly alter their address spaces and thereby violate the protection. So address space and dual-mode operation are things we haven't really talked much about since the very beginning of the class, and it's time to bring them back. Now, the basics of the address space: it's the set of addresses accessible to a given thread or process, and I just want to toss out a couple of things you all know, to make sure we're on the same page. If the address of a CPU is k bits, then there are 2 to the k things we can access; so if there were only eight bits, there would be 2 to the 8, or 256, bytes in the address space. We've moved far beyond that unless you're dealing with little tiny IoT devices, but one thing that is pretty standard now: when we're talking about a number of things, we're usually talking about bytes unless we say otherwise, and a byte is eight bits. In the early days, before people really figured out much about computers, everybody was doing something different; the things people counted came in all sorts of lengths. Six bits was actually a standard, because with 2 to the 6, or 64, values you can get pretty much all of the printable characters, a subset of ASCII, so there were six-bit units, and then of course 36 bits might be a word size. But today people would stare at you a little strangely if you talked about a 36-bit word; everything is multiples of eight and powers of two. So, 2 to the 10: what's 2 to the 10 bytes? That's going to be a kilobyte, or 1024 bytes. And notice that when we're talking about memory sizes, when we say a kilobyte, a KB, we mean 1024, not 1000. Now, I know in 61A and other early classes you all talked about kibibytes, KiB, but the real world
doesn't often deal with KiB; that's something you see when you're lucky, and you may need to interpret units when you're out in the real world. Usually, when you're talking about memory and someone says kilobytes, it's 1024. We'll try to make that clear on an exam, but if you're out in the real world and somebody talks about kilobytes of storage, you know they mean 1024. So we can do things like: how many bits to address each byte of a 4-kilobyte page? Well, 4 kilobytes is 4 times 1 kilobyte, which is 2 to the 2 times 2 to the 10, which is 2 to the 12, so 12 bits total. You're all going to be great at this kind of log-base-2 stuff by the end of the term, and you should get more and more comfortable with it. For instance, 12 bits is how many nibbles? A nibble is a single hex digit, or four bits, so how many nibbles are we talking about here? Three. Good. So three hex digits address a 4-kilobyte page. How much memory can be addressed with 20 bits, or 32 bits, or 64 bits? Well, 2 to the k; use your calculator app. And of course 2 to the 32 is a very common one for us these days, which is a little more than four billion; there are some numbers that are very useful to get to know. So, back to address spaces: 2 to the 32 is about four billion bytes on a 32-bit machine, and we typically look at that as running from 0x00000000 (eight zeros) up to 0xFFFFFFFF (eight F's); that gives you 32 bits of address space, and those 32 bits specify a specific byte within those four billion bytes. So, how many 32-bit numbers fit in this address space? (Why is this slide upside down? Well, I'm just trying to keep you on your toes; we'll go forward or backward or upside down every now and then; I apologize that it's not always consistent.) But: how many 32-bit numbers fit into this address space? Well,
that's a question you might ask yourself sometimes, because a 32-bit integer is how many bytes? It's four bytes; it's 32 bits. So there are 2 to the 32 over four, or about a billion, 32-bit integers in this address space. Now, what happens when a processor reads or writes to an address? This is an interesting question you probably haven't thought through, but you're probably sophisticated enough now to know the answer. When the processor reads from a particular byte in the address space, some of it acts like real memory: read it and you get data, write it and you modify the data, read it back and you get the data you modified. However, a lot of other things can happen. You could get an I/O operation; we'll talk about memory-mapped I/O later in the term, where just the act of reading or writing some address causes data to go in and out of an I/O device. Or maybe it causes a segfault, where you try to access something, say, in the middle here between the stack and the heap, and it's possible you'll get a page fault under some circumstances. Or it could be shared memory, which we'll talk about in a lecture or so, in which case you might actually be communicating between two processes by setting up shared memory between them. So the address space is the set of things you can access, and what happens when you do varies all over the place, depending on how you've set that address space up; and that's part of what we're starting today: understanding how to set the address space up. So this is the typical structure of a Unix-style address space: the lower addresses typically hold code, there's a stack segment at high memory that grows down (which in this case is up, because we're inverted), and the heap grows toward the FFF addresses
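The powers-of-two bookkeeping from the last few slides can be checked mechanically. This small Python sketch (the names are mine) captures the rules just stated: KB means 1024, a 4 KB page needs 12 address bits (three nibbles), and a 32-bit space holds about a billion 32-bit integers:

```python
# Log-base-2 bookkeeping for address spaces, as discussed above.

KB = 2 ** 10                          # 1024 bytes, not 1000, for memory sizes

def address_bits(n_bytes):
    # Number of bits needed to name each byte among n_bytes addresses:
    # the bit length of the largest address, n_bytes - 1.
    return (n_bytes - 1).bit_length()

PAGE_BITS = address_bits(4 * KB)      # a 4 KB page: 4 * 2^10 = 2^12 -> 12 bits
FULL_32_BIT_SPACE = 2 ** 32           # a little more than four billion bytes
INTS_IN_32_BIT_SPACE = FULL_32_BIT_SPACE // 4   # 32-bit ints are 4 bytes each
```

So `PAGE_BITS` is 12, which is three hex digits, and `INTS_IN_32_BIT_SPACE` is 2 to the 30, about a billion, matching the arithmetic in lecture.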
Okay, and there's a big space in the middle. The program counter — the PC, or IP, depending on which processor you're looking at — points into the code segment, and the stack pointer points into the stack segment; those are two registers we deal with all the time. And this idea that there's always a big hole in the middle is something you ought to keep in mind, because we're going to talk about it shortly: when we start doing virtual memory, we're going to want a virtual memory structure that lets us have holes like this — in fact, we're going to want a lot of holes in our space; this is just the most common one, which you learn about really early in this class. One other thing you're going to learn about is the sbrk system call, which is used to add more physical memory to the heap: you grow this yellow segment to be larger by putting physical memory there and mapping it. I believe we have you implement sbrk in one of the projects — either two or three. Any questions? Okay, good.

Why do we want the holes? There are two questions here. Even today, four billion bytes is a lot for most processes — unless you're doing some sort of high-performance computing, you never want all of that space. But a 32-bit processor can address the whole space, and we know the stack grows from high memory down and the heap grows from low memory up, which tells us right off the bat that if we don't want to fill all of this with physical memory, there are going to be holes — just because the processor can address far more than we want to actually back with real memory. So that's one of the good reasons for holes. And the question about sbrk: yes indeed, sbrk is called inside of malloc.
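As a rough illustration of the malloc-calls-sbrk relationship described here, this toy model (my own sketch, with a made-up `ToyHeap` class — not code from the course) grows a simulated program break only when the allocator runs out of room:

```python
# Toy model of a heap that grows via an sbrk-like call.
# Real sbrk moves the kernel-maintained "program break"; here we just
# track a break pointer and hand out addresses below it.

HEAP_START = 0x1000

class ToyHeap:
    def __init__(self):
        self.brk = HEAP_START        # current end of the heap segment
        self.next_free = HEAP_START  # next address malloc will hand out

    def sbrk(self, increment):
        """Grow the heap segment, like the sbrk system call."""
        old_brk = self.brk
        self.brk += increment
        return old_brk

    def malloc(self, size):
        """Bump allocator: call sbrk only when the heap is too small."""
        if self.next_free + size > self.brk:
            self.sbrk(max(size, 4096))   # grab at least a page's worth
        addr = self.next_free
        self.next_free += size
        return addr

h = ToyHeap()
a = h.malloc(100)    # triggers sbrk, then returns 0x1000
b = h.malloc(100)    # fits in the grown segment; no sbrk needed
```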
malloc is a user library — it runs in user code — and it calls sbrk when it needs more memory to put on its free lists. Okay, so the other thing to recall is that we've talked about the notion of single- and multi-threaded processes. You've seen this particular figure over and over this term, but recall that the address space is the protection environment — that's the box around the whole thing — and there are one or more threads inside. Each of the threads has a stack and a place to store its registers (that's the TCB), and then there's common code, data, file descriptors, and all the other common things that everybody in the same process shares. On the left we have a single-threaded process; on the right, a multi-threaded one. We've spent a lot of time over the last several weeks talking about how to make threads work; now we're going to talk about how to make this address space work. So you can think of threads as the active component — the thing that computes — and the address space as the protected component of a process: it's the thing that prevents threads from different processes from interfering with each other.

Okay, so what are some important aspects of this memory multiplexing? The reason I say multiplexing is that there's a single chunk of DRAM that's typically shared among a whole bunch of processes, so the question is what's important to think about. One aspect is obviously protection: we want to prevent access to private memory from other processes. Different pages of memory can be given special behavior, so that if you try to write a read-only segment, you're going to get a page fault. A good example of read-only is the code segment, because once the code is loaded it shouldn't be modifiable in many cases, and if you can make it read-only, then different processes can actually share the same code without interfering with each other. It's also the case that sometimes we
might want to have memory that's invisible to user programs but available to the kernel, and we can do that with mapping. The kernel data is protected from user programs, programs are protected from themselves, etc. — so protection is a big aspect of this multiplexing. The other interesting aspect is translation, and we're going to work our way into why translation is important. This is basically the ability to take the processor's accesses in one address space — the virtual address space it sees — and translate them into the physical address space, which is where the actual bits get stored in the DRAM. When there is translation — not all processors have it — the processor uses virtual addresses, the physical memory is in physical addresses, and we somehow translate between the two. One side effect is that you can use translation to help protect and avoid overlap between different processes: each of them can think it has address zero, when in fact they're pointing at different physical places in DRAM. The other thing is controlling overlap: if we have multiple processes all running at the same time, we want to make sure they don't actually collide in physical memory — first of all because that would corrupt their state, and second because controlled overlap is an important part of protecting, allowing communication only between the parts of different processes' address spaces that we choose. The default, which we told you at the very beginning of the class and which is by and large the most common thing, is that there's no overlap: no process can write to another process's address space, or even read from it. But we're going to start allowing controlled overlap, which is when they can share with each other.

Okay, now an alternate view that might be useful for some of you is to think about interposition instead. The OS interposes on a process's attempts to do I/O operations — why? Because you have to do a system call to do I/O, and as a result the OS intercepts every attempted I/O and decides whether to allow it. The OS interposes on a process's CPU usage: an interrupt allows the scheduler to take back the CPU and schedule somebody else — we've been talking about that. So really our question today is how the OS can interpose on a process's memory accesses and thereby come up with a uniform protection scheme. And the obvious thing to notice here is that it's not practical for OS code to take over on every read and write of the processor — that would just be way too slow. That's where our memory-translation mechanisms are going to come into play: we're actually going to provide hardware support so that the OS can do its interposition and enforce a good protection model without actually having to look at every read and write.

Now, there's a question in the chat: is it possible for a process to demand more virtual memory than we have space for? The answer is yes, and basically what happens is the process ends up getting killed with a segmentation fault of some sort. If you ever try to exceed the amount of physical memory, the OS clearly knows, because it's managing the physical memory, and it would end up killing off that process.

Okay, so really we're talking about interposing a protection model on a process running on a processor. If you remember, we talked about loading at the very beginning: a program is in storage, we pull it into memory, and at that point we make it ready to run. What's stored on disk
isn't always quite ready to run — there's a loading process — and what I want to tell you briefly is a couple of ways in which that loading can actually implement a translation and protection model in software. So I want to remind you of a few things from 61C. Here's the process's view of memory: we have some labels; we have some storage we're setting aside — this reserves 32 words; we have a load from that data — this loads from data1 and puts it into r1; and we can do some jumps-and-links and so on. What's on the left is assembly code, but when we load it into memory for execution it has to be put into binary. The compiling and linking that you've been getting good at is really the process of turning the compiler's output — the compiler compiles C and produces assembly — into something linked to physical addresses. For instance, if you notice, the load at the start of the loop is referencing data1, which is a particular address in memory — here, address 0x300 in hex — and that's where the data is stored; this load instruction loads from that address. How does that work? During the linking process we put the address 0x300 into that instruction — and it turns out you don't need the lower two bits in the instruction itself, so the 0x0C0 encoded there is really the same as 0x300, because 0x0C0 × 4 gives you 0x300. The linker is the part that figures out how to take all of these references to addresses and turn this into a binary that actually runs on the CPU. And that binary has actually been placed at a particular place: I said data1 is at address 0x300 — why? Because we're putting it at physical address 0x300. So this program, in assembly, is still in a kind of location-independent form, and the linking we do makes it runnable in a particular part of memory, where the address of the data is 0x300, the start is at 0x900, and we've done the linking so that the load instruction knows how to get to 0x300. Questions so far? This is all a quick 61C summary.

Now, what's kind of interesting: call this app X — what if we want to run it again, so we have two instances of the same application, but we want to put the second one in a different part of memory? Well, one thing we could do is some sort of address translation, so we can put it down here in memory and it will still run. But if we don't have actual translation in hardware, what we end up having to do is, for instance, translate and link it at a different address. Notice what I did here: I put the data at 0x1300, and I've altered all of the offsets in all of the instructions so they're consistent. If I do that, I can load two copies of this thing — each translated and linked slightly differently — and they can run in the same physical memory without needing any hardware translation. When the processor happens to be running in the green part of the code, everything works because it's been linked self-consistently; when it's running in the yellow part, it's likewise self-consistent and can run. So this is the compilation-and-linking process, where I'm linking to two different physical parts of memory. And I want to pause for just a second to let that catch up with everybody: notice that trying to run the same app at two different places in memory requires a different binary to be loaded, in this environment we're coming up with.

So I'm hoping people are starting to say, well, that seems inconvenient — because it means you're linking things at load time differently depending on where they land in memory, and it also means you can't just move them around in memory afterwards. Is everybody with me on that so far? I'm hoping everybody remembers this idea of assembly language being linked for a particular load point. So there are many possible translations: in fact, for every different place I could put this in memory, I have to translate the machine code differently to make sure all of the addresses used inside that application are consistent. Where do we do this translation? We could do it at compile time; we could do it at link or load time; or we could do it at execution time, with the right hardware support. So far I've been showing you link-or-load time, when we load this into the process. Here's kind of something you've all been doing without really thinking about it: you start with your source program, you run the compiler, which produces some assembly, which then gets assembled into an object module — the object module is the compiled, assembled version of this code before I've actually done my absolute linking. And then I can take a bunch of those .o files.
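The "relink for every load address" idea above can be mimicked in a few lines. In this sketch — my own illustration, using made-up instruction tuples rather than real machine code — relocating means rewriting every absolute address by the difference between the old and new load points:

```python
# Software relocation: patch every absolute address reference so the
# same "binary" runs correctly at a different load point.
# Instructions are toy tuples: (opcode, absolute_address_or_None).

program = [
    ("load",  0x300),   # load from data1, linked assuming load point 0x0
    ("add",   None),    # no address reference, so nothing to patch
    ("jump",  0x900),   # jump to start, also an absolute address
]

def relocate(code, old_base, new_base):
    """Re-link code originally placed at old_base to run at new_base."""
    delta = new_base - old_base
    return [(op, addr + delta if addr is not None else None)
            for op, addr in code]

# A second copy of "app X", moved up by 0x1000: data1 is now at 0x1300.
copy2 = relocate(program, 0x0, 0x1000)
assert copy2[0] == ("load", 0x1300)
assert copy2[2] == ("jump", 0x1900)
```

Every absolute reference had to change, which is exactly why a different binary is needed for each load address when there's no hardware translation.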
Along with some other object modules — maybe for libraries — I can link them, and now I have a load module. Then the loader, with the system library involved, loads all of that together. I can statically link libc if I want, or I can dynamically link it — and what that really means is that there are libraries already loaded and running on the machine, and only when I start running do those addresses get linked. So addresses can be bound to final values pretty much anywhere along this path. What I was showing you back there was binding the final addresses at the point of loading — a combined linking-and-loading process. Another thing, if we did it statically like this, is that we might link in the libraries at that time, one last little bit of linking before it starts running. But that's not what you typically do. Typically you get all the way to the loader, the loader loads the program in, and then the dynamic libraries are actually linked at the time things are running: instead of your call to the libc routine, there's a little stub, and as soon as that starts running we jump into the dynamic linker and link to a version of libc that's already running on the machine. That's how we can have a whole bunch of dynamically linked libraries that are read-only from a code standpoint and basically shared by all the running tasks on the system, so there's a lot less memory taken up — the dynamic libraries are essentially shared across all the programs. So DLLs are these dynamically linked libraries, and they're linked at the time the program starts running.

So let's talk about uniprogramming, which is back in the old, old days when you could only have one thing going on at once. Uniprogramming has no translation or protection in hardware at all: the application always runs at the same place in physical memory, and there's only one application running at a time. When you take your compilation chain, link something, and make it ready to load, you have to come up with absolute values for all of the offsets inside — just like we did back there, where we hardcoded all of the addresses at link time, good only for one particular load point in memory. This is basically what we did in the old, old days before we had more powerful machines — actually a bit before my time. (And by the way, I flipped the figure again, so now all the high addresses are at the top and the low ones at the bottom; we'll try to do that a little more consistently.) The application that's running gets the illusion of having a dedicated machine — because it's got a dedicated machine. Okay, this is not terribly interesting from an address-translation standpoint, so let's quickly move on and see what else we could do.

Well, if we wanted to take that idea and multiprogram it — which is kind of what we've been talking about this term — we could do it without translation or protection by making sure that we never accidentally overlap different applications, and by linking each one for exactly where it belongs in memory. By the way, that's like Microsoft Windows 3.1 or the original Macintosh OS. What we show here is application one running in one place, application two in another, the operating system up in high memory, and the loader/linker combination adjusting each application for its particular part of memory. The translation is done at load time — very common in the early days, kind of until about Windows 95 on the Microsoft side, when they started doing something more powerful.

Now, there's really no protection in this. It's quite possible that application one or two could reach out, start overwriting the operating system, and crash the whole system. At the time this was considered a feature, because you could get all these various drivers and other modifications to the operating system, download them from all over the network, and enhance your OS to do good things. This was an early time when people were much more naive about the dangers of doing that sort of thing — and there weren't as many people out there trying to screw you up. But we've clearly moved beyond this primitive multiprogramming to something else.

Now, a question here: in this environment, are all jumps relative? No, they don't have to be relative. When we link, like we did back there, we're coming up with an absolute set of binary code that's been configured to be exactly right for this particular place, so jumps can be absolute — we've modified things at load time to run in this particular part of memory. Does that make sense? That's not to say we wouldn't like to have a lot of relative jumps, because those would mean far fewer things have to get changed on the way into the system.

But let's start adding some protection. Can we protect programs from each other without translation? Of course — we talked about base and bound way back when. And by the way, the idea of base-and-bound protection came from the Cray-1. This, by the way, is the Cray-1 — one of my favorite machines, with its circular configuration and seats around the outside; I like to think of it as the love-seat configuration. And it was circular because
it was cooled, and every wire was carefully measured to make it as fast as possible — this was back when engineering came down to actually measuring wires. Now notice what we've done with base and bound: if application two is running, it can't exceed its base and bound, so it wouldn't be able to write into the operating system and it wouldn't be able to write into application one. We already talked about this — you may remember the slide from one of our early lectures. The idea is literally that the program is busy running in this yellow segment. Here's our original program, which we thought of as going from zero up to some limit; once we load it into memory and link it for that particular part of memory, the base and bound just prevent the program from getting outside of it. The CPU is basically just running instructions, and each address it issues, before we allow it to go to DRAM, is simply compared: is it greater than or equal to the base, and is it less than the bound? If so, we let it go forward; otherwise we fault. So this is now a feature of the hardware — as the addresses come in from the processor, we actually check them in hardware before we allow the access to happen. It certainly requires hardware support, but this hardware support is very simple, and the OS has to set the base and bound registers to make it work. Now, it requires a relocating loader to work — we talked about that already — you have to be able to take your program and relocate it so it's runnable starting at its assigned address. And notice there's no actual addition on the address path, so this is still fast: we're just checking at the edges to see whether we should allow the access to go forward or not.

Now, this is kind of fine and dandy, but it still requires that relocating loader. Wouldn't it be nice if instead we could come up with one linked version of the original program that could run no matter where it was in memory? To do that, we need to start doing our relocation in hardware rather than in the loader. Up till now we've been talking about doing this relocation and final linking in the compiler; now let's see if we can do it with translation. In general, the idea of translation — which we've also brought up a couple of times this term — is that the CPU is busy using virtual addresses, and those addresses go into something like a memory management unit, which translates from the virtual addresses the CPU is using to physical addresses. So suddenly there are two views of memory: the view from the CPU — what the program sees — which we call virtual memory, and the view from the memory side, which is physical memory. If we were to ask ourselves where every bit is actually stored, well, it's stored in the DRAM somewhere, and there's a physical address — but that physical address is different from what the CPU uses. There's a translation box in the middle, and that translation box is kind of the topic for the rest of the lecture: we're going to talk about what's in it. And as you might imagine, as the CPU produces virtual addresses and we translate them into physical addresses, something in the middle takes a little bit of time, because it's hardware: a virtual address goes in, some number of nanoseconds pass, and a physical address comes out.

Now, the question is: is there only one translation per system? No — once we have a general translator, we can translate any way we want, and in fact we're going to talk a lot about what those translations look like. You can map addresses any way you want once you've got that flexibility, and that's the important point here. Notice, by the way, that I also show untranslated reads and writes: typically you can go around the MMU, but only if you're in system mode. And once we have translation, it's much easier to implement protection: clearly, if task A can't even gain access to task B's data, there's no way for A to adversely affect B.

Now there's a question about whether the MMU traverses the page table faster than you could otherwise. I'm not sure I understand the question entirely, but I'll say the following: the MMU is doing this translation, and it's very important that the translation in general be very fast — hopefully faster than the cache, otherwise what's the point? Once you have things like page tables and so on, the MMU is occasionally going to be slow, and that's when caching comes into play. We'll get to speed later, so you're going to have to hold off on worrying about it for a moment — assume it's infinitely fast, and we'll come back to that.

Once we've got translation, though, every program can be linked and loaded to the same region of user space. Every process can pretend it's got address zero through address 50,000, say, because every time we give the CPU to a different process, we change the translation. So now we can give every process the illusion that it's got address zero, which means all of our linking and loading can be done once, no matter where the thing is going to run physically, because the virtual address space always looks the same regardless of where it's loaded physically. That's a huge advantage.

Now, the simple thing we started with just checked base and bound; what we're going to do now is actually translate addresses on the fly, by taking the program addresses coming out of the processor and adding a base to them. Notice that if I have program address 0x0100, I add the base 0x1000 to it with an adder, and what comes out is a higher address — and that's the physical address. So what we've got in blue is now physical addresses, what's coming out of the processor is virtual addresses, and the way those are related is a very simple addition operation. And if you notice, the bound is still checked on the addresses coming out of the CPU — we make sure they're not too big — and then we add the base and let things go forward. The good thing about this mechanism is its simplicity: we just have an adder, and the original program can, as I mentioned, all be linked once — no matter where this yellow piece ends up, it always looks the same, because we do this translation. All right, questions?

Now, this is hardware relocation — why? Because we have hardware, that little plus, relocating for us. And we can still ask: can the program touch the OS? No, because we don't let the program's addresses go below zero, so it can't get up there, and we don't let them go above the bound. Basically we've got a little sandbox here that the yellow code is forced to stay in, so it can't screw up the OS. Can the base be negative? No — these are unsigned operations. Can it touch other programs? No, because no matter whether another program is in memory above or below the yellow region, we still protect it by checking that addresses don't go below zero and stay less than the bound.
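The hardware path just described — bound check first, then add the base — fits in a tiny function. This is my own model of the idea, not the course's code:

```python
# Model of base-and-bound translation: check the virtual address
# against the bound, then relocate it by adding the base register.

class ProtectionFault(Exception):
    """Raised when an access falls outside the sandbox."""
    pass

def translate(vaddr, base, bound):
    """Return the physical address for vaddr, or fault if out of range."""
    if vaddr < 0 or vaddr >= bound:   # checked before DRAM ever sees it
        raise ProtectionFault(hex(vaddr))
    return base + vaddr               # the "little plus" in hardware

# A program linked for virtual addresses 0..0x1000, loaded at base 0x1000:
assert translate(0x0100, base=0x1000, bound=0x1000) == 0x1100
```

The program is linked once for virtual address zero; only the base register changes when the OS places it somewhere else in physical memory.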
All right — okay, so this is pretty simplistic, and you might imagine that if this were all there is, there wouldn't be a whole lecture on it; clearly something more complicated is going to be needed, so let's start looking at these ideas. One of the problems with simple base and bound is the following. Here I'm showing you a chunk of memory: we have the OS, and we have a couple of processes — six, five, and two — running. Over time, process two finishes, leaving a big hole; then process nine shows up, then process ten, but process five leaves. Now process eleven comes along, and we can't find enough space: even though process eleven would fit in the sum of the empty areas, no single area is big enough. And if you think about this simple base and bound, it requires the memory to be physically contiguous — we have to actually find one chunk of DRAM big enough to hold all of our data — and that's just asking for trouble, because suddenly we've got a fragmentation issue. And really, why is there fragmentation? Because not every process is the same size. As a result of the different sizes, we get fragmentation, and the only fix is that we'd have to copy processes nine and ten and push them up in memory to make space for eleven — a lot of expensive memory copying, just to coalesce things together and get out of the fragmentation issue. We're also missing support for sparse address spaces here: look at process eleven — its chunk has to be contiguous, and if you remember, just a few slides ago I said we want to be able to have a hole between the heap and the stack.
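The process-eleven scenario is easy to reproduce. This sketch — my own toy first-fit search over contiguous free regions, with made-up sizes — shows a request failing even though the total free space is sufficient:

```python
# External fragmentation with contiguous allocation: the total free
# space is enough, but no single hole is big enough for the request.

free_holes = [300, 200, 250]   # sizes of the free gaps left behind
request = 600                  # "process 11" needs one contiguous chunk

def first_fit(holes, size):
    """Return the index of the first hole that fits, or None."""
    for i, hole in enumerate(holes):
        if hole >= size:
            return i
    return None

assert sum(free_holes) >= request               # enough memory in total...
assert first_fit(free_holes, request) is None   # ...but no hole fits
```

The only fixes under contiguous allocation are copying processes around to coalesce the holes, or dropping the contiguity requirement — which is where segments and paging come in.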
Well, we can't get that hole between the heap and the stack given this base-and-bound idea, because the memory has to be contiguous — which already tells us that maybe we need something different. The other thing is that it's hard — very hard — to do any sharing of memory between two processes, because by definition process five, for instance, isn't allowed to access any memory outside its chunk, including the OS. The only way five could communicate with six is maybe to set up a pipe, where you do a system call into the OS and the OS calls back into process six — so we've pretty much forced ourselves to do all interprocess communication by going through system calls, because of the way we've set up this memory.

So one thing we can do — and this is done — is, rather than one segment per process, have multiple of them. You're already very familiar with this: you got to play with GDB on your very first homework, homework zero, and what you saw was a bunch of different segments representing things like the stack and the program. What we can do is have each segment be a contiguous chunk of memory. The user's view is a bunch of individual segments kind of floating in space; the physical view is that they're contiguous chunks in memory. And once we do this, we can say: this green segment four — we'll just map that same segment four into two different processes, and now suddenly they're sharing memory and can communicate with each other. So the mere act of having more than one chunk of memory suddenly gives us much more flexibility.

Now let's talk a little bit about what's in the MMU to give us multiple segments. We talked about base and bound as single registers, but now we need multiple base-and-bound pairs — or base and limit, as I have it called here. I'm showing eight of them; they might be loaded into the processor just like the single base and bound, so the segment map is in the processor. And now what do we do? Well, we take the virtual address and split off some segment bits at the top. How many bits do we need up there for our eight segments? Three — very good. Why? Because 2^3 is 8. So the top three bits pick the segment, and the rest of the bits we'll call the offset. The offset gets added to the base, which gives us a physical address, and we check against the limit to make sure we're not too big — that gives us an error if we've gone too far. And now suddenly we've got a multi-segment base and bound, and this is a little more flexible, because by making base two and limit two the same for different processes, we suddenly have a chunk of memory that can be shared. And we have as many chunks of physical memory as there are entries here. I don't know if you noticed, but I've also got these valid/not-valid bits, so some of these segments might be intentionally set up as not valid.

Now, how does this relate to the segment registers in the x86? It's very similar, with one slight difference: notice that in this particular model we grabbed the top three bits of the address and used them to tell us which segment register, whereas in the x86 model those bits actually come out of the instruction encoding — you take a few bits off the instruction, and that tells you which segment register you're dealing with, rather than pulling bits out of the virtual address. Absolutely the same idea otherwise. So if you were to look at what's going on inside an x86 processor — this one, for instance, is using the ES segment.
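The top-bits-pick-a-segment scheme translates mechanically. Here's a sketch of that lookup for a 32-bit virtual address with eight segments — my own model, with a made-up segment map mirroring the lecture's base/limit/valid entries:

```python
# Multi-segment translation: the top 3 bits of a 32-bit virtual address
# select a (base, limit, valid) entry; the remaining 29 bits are the
# offset, which is limit-checked and then added to the base.

SEG_BITS = 3
OFFSET_BITS = 32 - SEG_BITS

# A tiny segment map: segment number -> (base, limit, valid)
segment_map = {
    0: (0x00100000, 0x4000, True),    # e.g. code
    1: (0x00800000, 0x1000, True),    # e.g. stack
    2: (0x00000000, 0x0000, False),   # intentionally not valid
}

def translate_segmented(vaddr):
    seg = vaddr >> OFFSET_BITS
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    base, limit, valid = segment_map.get(seg, (0, 0, False))
    if not valid:
        raise MemoryError(f"invalid segment {seg}")
    if offset >= limit:
        raise MemoryError(f"offset {offset:#x} exceeds limit {limit:#x}")
    return base + offset

# Segment 1, offset 0x200 -> physical 0x00800200
assert translate_segmented((1 << OFFSET_BITS) | 0x200) == 0x00800200
```

Note the two distinct error paths — limit exceeded versus not-valid segment — exactly the two checks discussed here.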
it basically decides which segment it is based on the encoding and then it uses that to look up in the segment table okay so what's the vrn this is whether it's valid or not valid so typically when we do an access we're going to not only look up the base and limit and check the offset giving us an error but we'll also check the valid bit potentially giving us a different error if we try to access a not valid segment okay so suddenly we're getting into an interesting model here which isn't quite where we want to be but it's starting to show you all the major uh interesting aspects of a translation scheme where there are certain requirements on the addresses namely um we we can only talk about segments that are currently valid there's also uh certain constraints on the offset in this case where the offsets can't be too big um Etc and we're starting to look like I said at certain uh access requirements of valid or not valid we'll get more sophisticated here in a moment okay questions now uh what you should how large are segments well segments in this case suppose this is a 32-bit address and uh we take three bits off the top what's the biggest segment that we could have how do we do that yep two to the 29th exactly so this particular scheme could have a really large segment right um and so the the size of the segment is really the maximum size of the segment has to do with the maximum size of the limit in this case okay okay good now um here are the x86 model the the basic original uh 8386 introduced um protection but also had these segments and so these segments there's a six of them that you're well familiar with the code segment stack segment and then four other segments are typically used and um this is a typical segment register just like the green ones from the previous page and it's not not quite the same as what it was but it's close so this index in the segment register actually points into a table that then looks up the uh the set of segment registers that 
you have access to. Okay, so this is just one little level of indirection, but it's pretty close. And so what's in CS, for instance, is these 16 bits: the index is used to look up in another table to get the base and limit, and then there are a couple of other things. So for instance the current RPL level is what level you're executing at: are you executing at kernel level or user level, for instance? Remember, there are two bits here because there are four levels.

Segmentation is fundamental in the x86 way of the world, okay? So you can't just turn segmentation off; it's in every access. If you were to look at every instruction, there is some segment portion of that access; it may be implicit or it may be explicit, but there's always a segment portion of the access when you're dealing with x86. What if you just want to use paging or some other flat scheme? We'll talk about that in a moment, but typically what you do is you just set the base to zero and the bound to all of memory, and now you've effectively said "I'm not going to worry about my segments anymore," because you're effectively treating them as pointing at all of memory. Okay. And by the way, when you get into the 64-bit version of the x86 scheme, all but the top two segments, FS and GS, effectively have a base of zero and a limit of two to the 64th, so they're essentially nonfunctional. We can talk more about that a little bit later.

Okay, so I want to give a very simple example here, again just to walk us through. So if you look, here are four segments, 1, 2, 3, 4; that means two bits taken out of the address, out of a 16-bit address, so this is a really small address space. Here's the virtual address space, here's the physical address space, and I want you to notice that I've divided this into things that start at 0x0000, things that start at 0x4000, at 0x8000, and at 0xC000. And if you think that through, if you strip the top two bits off of 0x0000, the top two bits are all zeros here, the top two bits are zero-one here, the top two bits are one-zero here, and the top two bits are one-one here. Now, if what I'm saying is a mystery to you, it's time to start reviewing your hex: you should get to where you know zero through F very cleanly and you know exactly what the four bits are, so that you can strip that off easily. But we'll assume for a moment you know that.

So what that means is, for the segment with ID zero, where the top two bits are zeros, I look up here and I say: oh, segment ID 0 has a base of 0x4000 and a limit of 0x800, so that means that this little pink chunk here gets mapped to this little pink chunk in physical space by this scheme. Similarly, this chunk of cyan (I should call this magenta, I suppose), this little chunk of cyan gets mapped to this chunk. How do I know that? Well, because 0x4000 is a 01 in the upper two bits, and segment ID 01 has a base of 0x4800 and a limit of 0x1400, which means that it goes from 0x4800 to 0x5C00. Okay, and similarly, etc. And we can start talking about, oh yeah, this yellow chunk is something we share with different apps, or whatever; there are lots of ways to decide how to put controlled overlap into your use of physical space once you have the ability to do this mapping. Okay, so mapping is pretty powerful. And the other thing to keep in mind is this green table, which is in the processor, needs to get swapped out every time I change from one process to another, because I'm changing the address translation. So when I change from one process to another, I save out the green table for one process and I load in the green table for the other. Okay.

All right, now I want to give you another example of translation. So here is some assembly, and here are some virtual addresses. Blue here is virtual, because the processor is going to be running there; here are my segment registers. All right, and we are going to pretend to be
processors. I don't do this too often because it's time-consuming in class, but let's simulate a bit of this code to see what happens. So let's start the program counter at 0x0240, and notice that's a virtual address, because this is what's in the actual program counter of the processor. Okay, so the program counter has 0x0240, and the question is: if it wants to load the next instruction for that program counter, what happens? Well, it's going to fetch 0x0240. What happens in the MMU is it takes this address, which is 16 bits; and by the way, this is 0x0240, so 0 is 0000, 2 is 0010, 4 is 0100, and 0 is 0000. Again, maybe you want to write out hex-to-binary conversions and put them under your pillow and sleep with them until they absorb into your brain, if this is not something you're comfortable with. But once I translate the address into bits, I can look and say: oh look, the top two bits are zero, which means I'm in the code segment, virtual segment zero. What's the offset? Well, the offset is pretty much everything else: if I take the top two bits out of there, what's left over is still 0x240. And so what I do is I take my base, which is 0x4000, and I add it to the offset of 0x240, and what do I get? 0x4240, voila. So the physical address has been translated from this virtual instruction fetch to 0x4240, and at that point I fetch from DRAM at 0x4240 and I get that instruction, which is a load-address into $a0 from varx. So now I've got the instruction loaded.

All right, and what I want to do is load this varx address, which is 0x4050, into register $a0. Now notice 0x4050 is a virtual address, and I'm loading it into register $a0. Question: do I translate the 0x4050 into a physical address before I load it into register $a0? Can anybody tell me whether I do that or not? Good, no. Why? Everybody who said no is correct. That's right, good: the process only sees virtual addresses, so this is a virtual address, 0x4050, and it gets loaded into $a0. Great answer. So I don't translate, because I'm not going to DRAM here; I'm just loading a constant address into $a0.

Okay, now the next instruction we're going to fetch: well, we were at 0x0240, now we're at 0x0244, because we incremented the PC by four, since in this RISC-style processor the instructions are 32 bits in size. We translate the 0x0244 into 0x4244, which is exactly what we just did in the previous step, but the next instruction we bring in is the jump-and-link to strlen. Okay, we're going to jump to where the string-length routine is, and once again we're going to move the 0x0248, which is the return address after the jump-and-link, into the return address register. In typical RISC processors, like the RISC-V that you guys dealt with in 61C, there's a return address, and that return address is going to be 0x0248, which is once again a virtual address. And since we're jumping to virtual address 0x0360, we put that into the program counter. Now, I want everybody to appreciate the fact that the only time we translate is when we go to DRAM, which so far is only when we load the instructions; we have to figure out where those instructions are stored in DRAM.

Okay, now we get to this strlen, where we're going to load a value into $v0. So we translate (the physical address here is 0x4360), we get it, and we're going to move the constant zero into $v0; that's what that instruction does. And we're going to increment the PC by four, and the last thing we're going to show you here is fetching this instruction. So far the only time we've done any virtual-to-physical translation is when we load the instructions, but now look at this: this instruction is a load byte. So not only do we load the instruction, which we fetch from 0x4364, but now we want to load the value at the address stored in $a0. So we have to take that $a0, which is 0x4050, and translate it. Well, 0x4050
looks like this in binary: 0100 0000 0101 0000. This is virtual segment one, because the top two bits are 01. The segment table tells us the base is 0x4800; therefore the physical address is 0x4850. All right, and then as a result we load a byte from 0x4850 into $t0, etc., and we're good to go. Okay. Now if you notice, we actually did do a virtual translation, right? We figured out the top two bits, the bottom bits are this 0x50, and when we add the base plus the 0x50 we get 0x4850. So this is showing you the translation going to DRAM both when we're loading instructions and when we're doing data accesses. All right, I realize that was a long process, but I just wanted to talk through it once in class so that everybody had seen it once. Okay, do we have any questions on this before I move on?

Now, does the OS have special instructions that access physical addresses directly? Yes, in most cases there's a way to go around the MMU. The other thing that the OS has access to is this green set of registers: only the OS ought to be able to modify these, which means only when you're in system mode, not in user mode, are you allowed to change the green registers.

All right, so what are some observations about what we just did? We're translating on every instruction fetch, load, or store; okay, that's fine. The virtual address space has holes in it; that's good, right? If we look here, we've got some holes in the virtual address space, and that may be getting us part of where we wanted to be. When is it okay to address outside a valid range? Well, this is how the stack grows. If we look at our stack back here, and I'm going to go back to this previous figure, we might have the stack with base 0 and limit 0x3000, so that's this green chunk. We could figure out that if the process tries to go outside of that, it effectively wants to grow the stack, because it gets a page fault, or a segmentation fault in this case, by trying to access an illegal address that's outside of the limit. The OS could take that as an indication that it needs to put more physical memory in there, and it could grow the segment in that case. Okay, but you can see that there are some limits to what you can do, because you can't run into another segment.

All right, the other thing is, clearly we need protection modes in the segment table: for example, the code segment might be read-only, data and stack might be read-write, etc. So we want to start putting protection bits on the different segments. What do you have to save and restore on a context switch? Well, in this particular scheme, the segment table is stored in the CPU, not in memory, because it's small, and therefore every time we switch from one process to another, we need to store the green segment table out to memory and pull in the green segment table from memory for the next process. And if we want to put a whole process on disk, we have to swap it all out; we'll talk about that in a section.

All right, now what if not all the segments fit in memory? So if I take the set of all processes that I want to run, and they need more physical memory than fits, one option is you just kill them off if they don't fit. A less drastic option is, in fact, to do swapping. This is an extreme form of context switch, where you take whole segments and send them out to disk so that other processes can use the physical memory. Now of course the cost of context switching is a lot worse in that case, because you've got to go to disk. Remember the number I told you guys: a disk access is like losing a million instructions' worth of time. Okay, and notice that because of the way we set up our segments, this is extremely inconvenient: they always have to be kept together as a whole. So if you look at that green chunk of memory, it doesn't matter how big it is, the whole thing has to be swapped out to disk; we have no option of putting just part of
the segment out there. Okay, so you might imagine that we need something better here, because this is not quite what we need so far. A desirable alternative to swapping everything might be some way to keep only the active portions of segments in memory at any given time and swap out the ones that are idle, and that needs something better than this whole-segment-at-a-time approach we've gotten ourselves into. We need finer granularity.

Okay, so the problems with segmentation are: you must fit variable-sized chunks into physical memory, leading to fragmentation; you have to move processes around multiple times just to deal with fragmentation (remember, I showed you that you had a set of processes, some left, you added some new ones, and pretty soon your memory is all fragmented, and your only option is to move stuff around, which seems inconvenient); and you have to swap the whole thing to disk. So really there are multiple levels of fragmentation that are bad here. And just to remind you guys of the different types of fragmentation, there's external and internal fragmentation. External fragmentation means there are gaps between allocated chunks of memory that need to be coalesced together, and that's really what we're talking about here. Internal fragmentation means you've allocated a chunk of memory and you don't need all the memory within the chunk; it's possible that we allocate our segments larger than we needed, and now we've got fragmentation inside of them. But this external fragmentation is clearly a major problem with our model so far.

So that leads to this picture, which I've shown you several times this term, and the idea there is we want a smaller quantum of stuff, right? We want to go through this translation, but rather than having whole segments' worth of chunks, we divide the data into lots of little pieces, which we're going to call pages, and translate each one of them separately, and now we have a lot more control over placement. Okay, so that's going to be general address translation, not just the simple base-and-bound segmentation that we've been talking about.

So we're going to do fixed-size chunks, and this is a solution to fragmentation first and foremost. Physical memory is now going to be in page-sized chunks, and I'll tell you right off the bat, a page is typically somewhere between 4K and 16K bytes; let's think 4K for a moment, which is, remember, four times 1024. Every chunk of physical memory is now equivalent, and so you can just use a vector of bits to handle allocations. There's no longer this weird business of keeping track of all the free segments and their sizes and figuring out if you have to coalesce them together, etc. Now pretty much any chunk of memory is the same size as any other chunk, and so really we only need a big bitmap that tells us which ones are free and which ones are in use. That seems advantageous, right? Should the pages be as big as our previous segments? Well, no, because that clearly led us into some problems with fragmentation; what we want is smaller pages. The original units were kind of in the 1K size, and you can get up to 16K; we're going to think about 4K, which is kind of in the middle. And so that means that what we were calling segments, like the stack segment or the code segment or whatever, are really comprised of a bunch of individual pages that we're then going to put together into a virtual memory space for the processor to access. Okay, and so our MMU, the memory management unit, is going to do something more than just this base-and-bound translation: it's actually going to translate from one set of pages to a different set of pages, from virtual to physical.

So how do we get simple paging? This is our first try at it. Rather than having a set of registers
inside the processor which gives us the base and bound, we're going to change gears for a moment and actually have a single register called the page table pointer, and it's going to point at a chunk of memory that holds a set of page translations, called a page table, and that's going to be stored in memory. For now, we're going to have one page table per process. And for those of you that are thinking ahead, this is not quite what we want yet; we're going to get to what we want next time, but we're going to get closer this time.

So this green portion now resides in physical memory, not in the registers of the processor. It contains the physical page and permissions for each virtual page: if you notice, page zero here is valid and read-only, page two is valid and can be both read and written, etc., and page four is not valid. And how does our virtual address mapping work? Well, here's our address: we're going to take the top set of bits, and that's going to be our virtual page number, and the bottom set of bits is going to be the offset, and this offset is going to have enough bits for our page size. So we've decided on a 4-kilobyte page, which means the offset is 12 bits, and the virtual page number is pretty much everything else; it's all the rest. And so now the offset in our translation is really easy, because all we do is take it out of the virtual address and copy it over to the physical address. So the lower 12 bits of the physical address are exactly equal to the lower 12 bits of the virtual address. And then the virtual page number is used as an index into this page table. So if those remaining bits happen to be 0...01, then that represents page one, and so we take the virtual page number, we look it up in the page table, it gives us the physical page ID, that's page number one, which we copy into the physical page frame number, and now we've got our physical address. So we take the virtual page number, look it up in the page table, copy the physical page number into our physical address, and we're good to go.

And if you look at this, by the way, I'm talking about 1K pages here for a moment: if the offset is 10 bits, then you might have 1024-byte pages, and so it's 10 bits that get copied. The remaining bits: well, if it's a 32-bit address, then 32 minus 10 is 22 bits, so there are 4 million entries. Those 4 million entries are used to index into the page table, one of 4 million options for which page it is; we look it up. And of course we've got to check bounds: this page table is only so big, and in this case there are only six entries here, so the page table size says that if this virtual page number is bigger than six, we get an error. And if I try to do something the permission bits don't allow, like writing when I'm only allowed to read, then I get an access error.

Now, does the OS give every process a page table pointer? That's exactly correct: every process gets a page table pointer, and the way we've set this up so far, every process also gets a page table size as well. So they get a pointer and a size, kind of like base and bound, but now this is a level of indirection at page granularity. And by the way, by the time we get to next time, we're no longer going to have a page table size, because this is not going to be quite what we want. Now, if the process wants more data than a page, that's not a problem, because we just use page zero and page one contiguously: we find pages for them in physical memory, and all of a sudden the virtual address space has two pages' worth of virtual addresses that are physically backed. So the nice thing about this paging is that you can allocate physical data any way you desire, to give you whatever set of contiguous virtual addresses you like. Okay, so this is ultimately flexible.
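The single-level translation just described, split off the virtual page number, copy the offset unchanged, and check validity and permissions, can be sketched in a few lines of Python. This is only an illustrative model; the names (`PageTableEntry`, `translate`) are not from the course code or any real kernel.

```python
# Sketch of single-level page-table translation with 4 KiB pages,
# so the offset is the low 12 bits and the VPN is everything above.

PAGE_BITS = 12
PAGE_SIZE = 1 << PAGE_BITS  # 4096

class PageTableEntry:
    def __init__(self, ppn, valid=True, writable=False):
        self.ppn = ppn            # physical page number
        self.valid = valid        # valid/not-valid bit
        self.writable = writable  # permission bit

def translate(page_table, vaddr, write=False):
    vpn = vaddr >> PAGE_BITS          # upper bits: virtual page number
    offset = vaddr & (PAGE_SIZE - 1)  # lower 12 bits: copied unchanged
    if vpn >= len(page_table):        # bounds check against table size
        raise MemoryError("error: VPN beyond page table size")
    pte = page_table[vpn]
    if not pte.valid:
        raise MemoryError("page fault: invalid page")
    if write and not pte.writable:
        raise MemoryError("access error: write to read-only page")
    return (pte.ppn << PAGE_BITS) | offset

# Page 0 read-only (like code), page 1 read-write (like data).
pt = [PageTableEntry(ppn=4), PageTableEntry(ppn=3, writable=True)]
print(hex(translate(pt, 0x0123)))              # -> 0x4123
print(hex(translate(pt, 0x1456, write=True)))  # -> 0x3456
```

Note that the offset bits pass through untouched in both cases; only the page-number bits are rewritten by the table lookup, which is exactly the "copy the offset, translate the VPN" rule above.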
I hope I answered that question. So let me show you a very simple page table example. This is a silly one, but it gives you the idea: we have four-byte pages. A four-byte page means we only have two bits of offset, right, and the rest is the page number. So if we have 8-bit addresses, since two bits are the offset, the top six bits are the virtual page number. Page 0 here maps to physical page 000100, that's the number four; we copy the 00 to the offset, and that tells us that this pink set of virtual addresses turns into this pink set of physical addresses. And we can do the same with the blue and the green, where, notice, the cyan ones are page 000001, because you take the 0x4 and split it out; so this is page one, which turns into physical page three, and that's why this chunk of cyan maps to this chunk in the physical space. And the green one similarly.

Now we might ask, well, where is six? The thing about six is, if we split it into the offset, which is two bits, and the page number, we see the page number is still one and the offset is 10, or two. So we're basically in this blue region, right, and address six is between four and eight, so that makes sense. All we do is take 000011, that's our physical page, copy the offset to the offset, and we're good to go; that's over here. And the same is true with nine. What's nine? Well, nine is 00001001: the lower two bits are copied, the upper six bits tell us which page we want, that gets translated, and we find out that we're up here. So virtual address nine turns into physical address five in this translation scheme. Okay, questions?

Now, good question: if I fragment pages across physical memory, does it matter? It depends on what you mean by that, but let's assume we're looking at this figure and talk about that. Notice the processor sees three pages in a row that it can use, and it could have a data structure that spanned all three pages if it cared, right, and notice how those are split all over the place in physical memory. The answer is, it doesn't matter from the translation standpoint. However, there are certain cases where the DRAM might be a little faster if things are next to each other, but that's going to depend a lot on your architecture; by and large, I would say it doesn't matter how scrambled they are in physical memory, the processor gets to use them in virtual memory. And when we get more into performance, it's going to be more about which of these pages are on disk versus in memory than how they're scrambled amongst each other. The other question is, will a data structure ever be only partially loaded if it's spread across multiple pages? Yes: it's possible that this blue page is out on disk while the pink one isn't, and so if I start reading a data structure in memory and I get to the blue part, that may cause a page fault, which has to pull stuff off of disk; we'll get to that next time as well.

Okay, now what about sharing? I want to just show you a little bit here. Here's an example where process A, here's its page table, and I'm going to map this page number two to some chunk of memory; and here we go, process B has a different page table pointer and a different page table, and it has an entry that ends up mapping to exactly the same physical page. Because we did that, processes A and B can share data by writing in their shared page, and each can see what the other wrote. I hope you all see something weird about this, though: process A sees that data at one set of addresses, namely up top here at page 000010, while process B sees that same data at a different place, at a different virtual page number. So these two virtual addresses map the physical page to different places, so you would never really want to do this
probably, unless you had some sort of data you were sharing that didn't mind where it was. If this is a linked list, you want to make sure that the mapping in the page tables is at the same place in the two processes, and we'll talk about how to do that when we get to setting up shared memory segments.

Now, if you bear with me for just a second: where do we use page sharing? All over the place. The kernel region of every process has the same page table entries; the process can't access it at user level, but when you do a user-to-kernel switch, the kernel can access those pages. We're going to talk about the Meltdown bug next time, and that will be an interesting issue to discuss, but for now, the kernel can share the same pages. Sharing between different processes is obvious too: different processes running the same binary, which we talked about earlier. I don't know what your favorite editor is here, but if Emacs is running twice on the same machine, all of that code is read-only and it's mapped to the same shared set of pages between the two processes; their code segments actually end up mapping to the exact same physical DRAM, and those two processes can run away happily sharing the same data. Okay, I have started a holy war by saying Emacs versus vi; by the way, I'm a big fan of Emacs, so I apologize to all of you out there. User-level system libraries are another great example of sharing: we share dynamically linked libraries, which I mentioned earlier in the lecture, and the way that works is the actual code is shared and it's linked into every process automatically. And you can have shared memory segments between different processes, usually shared at the same point in their virtual address space, as a way to allow processes to literally talk to each other through shared memory; you can share linked lists and objects and everything.

Now, the memory layout for Linux is kind of like this; it's a little bit different from what we've been talking about. Typically the kernel space is up high, the top one gigabyte in a 32-bit machine, and the lower three gigabytes are for the user code. And although we've been talking about the stack starting at the very top, in fact it starts at a random offset, things like dynamic libraries are at a random spot, and the heap is at a random spot. The reason this randomness is introduced into the starting points is that it makes it a lot harder for an attacker who breaks into your process or the kernel to find your data, because it's been moved all over the place. And notice, just from this figure, all of the holes in the space. So as a thought from now until next time: what we've come up with doesn't work with holes very well, because this page table is contiguous, and if we have a whole bunch of pages we need, with a bunch of holes in between, we need enough page table space to cover everything. So this is going to be a problem. And are these holes used? Well, right now they're not used for anything; in the stack, these holes are going to help signal that we need to put more physical memory in after we try to go below the currently assigned stack. So yes, the holes can be used once they've been mapped. And you can have more than one page for the stack, but you only put the minimum down there for now.

Okay, so I should let you guys go; we'll talk about some of these other interesting questions later. We talked a lot about segment mapping: segment registers within the processor; by default a segment ID is associated with each access, maybe because it's a couple of bits in the address or because it's in the actual instruction; every segment has base and limit
information, and these segments in some cases can be shared. We started talking about page tables: in this case memory is divided into fixed-size chunks; the virtual page number is pulled out of the top of the address and the lower part is the offset; you just copy the offset and translate from virtual page number to physical page number. And unfortunately, right now we have really large page tables because of the way we've done this; next time we'll have multi-level page tables, where we deal with sparseness much better and the page tables have much less overhead. All right, so I'm going to let you guys go for now. I hope you have a great night; we'll see you on Wednesday.
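As a recap of this lecture's worked example, the two-bit-segment, 16-bit-address scheme can be sketched in Python: the top two bits pick a row in the segment table, the offset is checked against the limit, and the base is added. This is an illustrative sketch, not course code; the code and data entries use the base/limit values from the worked example, and the stack entry follows the base 0, limit 0x3000 mentioned in passing above.

```python
# Two top bits select the segment; the remaining 14 bits are the offset.
SEG_BITS = 2
ADDR_BITS = 16
OFFSET_BITS = ADDR_BITS - SEG_BITS  # 14

# Per-segment (base, limit, valid), mirroring the lecture's example.
segment_table = {
    0b00: (0x4000, 0x0800, True),   # code segment
    0b01: (0x4800, 0x1400, True),   # data segment (0x4800..0x5C00)
    0b10: (0x0000, 0x0000, False),  # unmapped (valid bit clear)
    0b11: (0x0000, 0x3000, True),   # stack (values assumed from lecture)
}

def translate(vaddr):
    seg = vaddr >> OFFSET_BITS               # top two bits pick the segment
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    base, limit, valid = segment_table[seg]
    if not valid:
        raise MemoryError("error: access to invalid segment")
    if offset >= limit:
        raise MemoryError("segmentation fault: offset exceeds limit")
    return base + offset

print(hex(translate(0x0240)))  # instruction fetch -> 0x4240
print(hex(translate(0x4050)))  # load of varx      -> 0x4850
```

These two calls reproduce the simulation from the lecture: the fetch of 0x0240 lands at physical 0x4240 through the code segment, and the data access to 0x4050 lands at 0x4850 through the data segment.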
CS_162_Operating_Systems_and_Systems_Programming_Berkeley | CS162_Lecture_24_Networking_and_TCPIP_Cont_RPC_Distributed_File_Systems.txt

Well, welcome, everybody, to CS162. We're getting down to the very end here, and there's no class on Wednesday, just so you all know. I would like to pick up where we left off: we were talking about a number of things involved in extending operating systems out to the network as a whole. So we talked about the distributed consensus idea, and that was basically a situation in which you have several different nodes spread throughout the network; they all propose a value; some nodes might crash or stop responding; but eventually all the nodes decide on the same value from some set of proposed values. That's the general consensus problem. There's a simpler version, which is distributed decision making, and that's where you choose between true and false, or commit and abort, or one of two options; essentially, the job of all the nodes participating in some consensus protocol is to collaborate and eventually come up with exactly the same decision.

Equally important to the initial process of reaching consensus is making sure that it's recorded for posterity: how do you make sure the decisions can't be forgotten? The simplest thing, of course, is recording on disk, but in a global-scale system you could start talking about replicating much more widely, somewhat like a blockchain application. The particular type of distributed decision making that we spent a little time on last time was two-phase commit, and the key behind two-phase commit is that there's a stable log on every participant to keep track of whether a commit is going to happen or not; if machines crash in the middle of the protocol and then wake up, they can look at the log to see what they've committed to in the past. The two phases, of course: the prepare phase is the first one,
where the global coordinator requests that all participants make a decision to either commit uh or not and um so basically you ask each participant what they want to do they either say commit or abort and they make sure to record their decision in the log as we mentioned so that if they crash in the middle they can come up and they will never come up with a different decision than the one that they've committed to so to speak and then during the commit phase if everybody has said commit then the coordinator will tell everybody to go ahead and do the actual commit at which point they all record that the final decision was commit and they go forward and of course if any one participant decides to abort then they all abort and the crucial idea here is either it's atomic atomic decision making either everybody decides to commit or everybody decides to abort and uh there's no mixing of the two okay and so that was kind of uh the simplest example of this and we talked about some of the downsides of two phase commit among other things being that a crashed machine can prevent everybody from moving forward and so then we started talking about alternatives after that okay um let's see here so the log is basically a crucial part of that so if you go back and look at several of the slides that i had walking through the protocol you can see how the log make sure that we always have that atomicity property of everybody decides to do commit or everybody decides to do abort the second topic that we just started with uh toward the end of the lecture was we were talking about network protocols and we mentioned that there are many layers in the network protocols there's the physical level which is the ones and zeros could be optical phases it could be any number of things we talked about the link level which is packets being sent on a single link with their formats and error control for instance we talked about network level communication where you put a bunch of links together for a 
path we talked about transport level we just started about that which is for reliable message trans uh message delivery and we're going to spend a lot a good chunk of today figuring that out as well and so this is a rough diagram to keep in your mind here the physical and link layers down at the lower level can be uh any number of technologies like ethernet or wi-fi or lte or 5g or whatever you like and those get you one hop in the network ip typically gets you more hops okay so once you got the ip protocol then you could route from here to beijing for instance as long as you knew the right ip address things would be forwarded hop by hop through the network above that level is the transport layer where we actually start doing better than just talking about machine to machine communication we can actually start talking about process to process communication and then of course you build applications on top okay rpc stands for remote procedure call we'll show you that a little bit later in the lecture okay so um and a lot of things are built on top of remote procedure calls so we'll talk more about that so this layering uh is building complex services from simpler ones and each layer provides services needed by higher layers uh that utilize those services so this is uh something that you've known for all the time you've been in computer science at berkeley layering can be a good thing the physical link layer is typically very limited so it's one hop and not only is it one hop but it's it's uh unreliable typically there's a maximum transfer unit so somewhere between 200 and 1500 bytes are very common um it's only uh high performance networks inside of cloud uh processing that might have what are called um larger packets that might be 9 000 bytes or so but typical 1500 is the max you see routing is limited uh with a physical link uh possibly through a switch okay what we're gonna try to figure out now in the next uh bit of the lecture is if we have these limited 
messages that are of limited size how do we basically build something we can use so the physical reality is packets the abstraction is one of messages so we can build our decision-making algorithms so we can build distributed storage which we'll hopefully get to by the end of the lecture today um the physical reality is that packets not only are they limited in size but they're unordered so sometimes the packets might arrive in a different order than you sent them the typical abstraction is that random ordering is not good for us we'd like things to be ordered the physical reality is that packets are unreliable remember when we talked about the end to end philosophy we said gee the network ought to not do things that the endpoints still have to do anyway and so datagram networks where the packets are not guaranteed to make it to the destination are the typical thing in the middle because at the end points we have to have some reliability protocols we'll talk a little bit about that today the physical reality is that packets go from one machine to another which is only sometimes useful it's much more useful to go process to process the reality is that packets only go on one link over the local network we'd like to route them anywhere the reality is that packets are asynchronous they kind of go when they can we'd like them to be more synchronous so that we know when something is completed and then of course packets are insecure and we'd like them to be secure so the realities of the physical pieces on the left are ones that we would like to be able to basically hide under a virtual communication abstraction giving us a much cleaner messaging abstraction okay now i showed you this last time but i just want to pop this up really quickly ipv4 for instance basically has a header that's wrapped around data so you put this on the front of it and these 20 bytes have a bunch of fields including the source and destination address so where am i going where am i
coming from and then a protocol which i have highlighted in red here is typically what type of ip packet is this and we'll show you a couple of those now by process to process do i mean on different machines yes process to process on different machines is something we would like to achieve sometimes you use the ip protocol abstraction and go from process to process on the same machine um but the thing that's much more interesting here for this discussion is going from one machine to another okay and from one process on one machine to another process on another machine so now doesn't the protocol field violate abstraction somehow you might think of it that way but it turns out it gives you enough information that when an ip packet comes in you can de-multiplex it to the right protocol handlers and so that's kind of a minimal call it an abstraction violation if you like but it's kind of a minimal requirement there in order to very rapidly process incoming ip packets now protocol can be tcp it could be udp it could be icmp it could be any number of things so today we're going to talk about udp and tcp in particular how do we build process to process communication from machine to machine so looking back at this header again notice that it's a 32-bit source address and destination address so the source is where i'm coming from that's my local machine the destination is where i'm going let's say that's in beijing somewhere these are two machines they don't say anything about which process is on those machines like a web browser for instance or a web server it doesn't say anything about who would like to communicate okay and so the simplest thing we can do is called udp which is a type of process to process communication we get by taking this ip header this is the one i showed you earlier 20 bytes and adding a udp header which basically has source and destination ports these are 16 bit numbers a length for the
data and a checksum but these two ports the source and destination ports are part of that five tuple if you remember when i said you create a socket from machine to machine remember it was source address destination address source port destination port and protocol so all five of these things that you see here together work okay uh to give you a unique connection between two processes udp is very simple okay it's another type of datagram but one that basically goes from process to process and so if you see here this is ip protocol 17 which was we put a 17 in that header it's a datagram so it's fully unreliable uh as we use it it goes from source to destination and it's really low overhead and it's really low overhead because we just put a few extra bytes eight bytes on top of the ip header to get the um the udp header okay and it's often used for high bandwidth video streams and so on and it's a very good way to sometimes overuse network bandwidth if you're not careful because there are no restrictions on how many packets you can try to force into the network and so a number of uses of udp can be considered anti-social almost if you use them incorrectly all right and we'll we'll see how tcp is is different than that so all right now here's this layering that we just talked about seeing the gray at the bottom here is the physical uh you know ones and zeros and the data link layer above is that link to link and so basically going one hop goes over the the data link physical combination here to go from say a host to a router to a destination host but that's not going to get us very far without being able to route so this actual hop the data link physical gets us some host data to the router or from the router to host b it's this network layer on top that's doing ip for instance that decides how to go hop to hop to hop using routing tables to get you from your source to your destination above that is going to be the transport layer which is going to be for instance udp 
or tcp and then applications on top of that and of course applications are the ones that open the sockets so the reason i've got these arrows the way i do here is you think when you're writing your application that it's communicating directly with an application at the destination in reality what's going on is your application sends uh something through a socket and it really goes through the different layers in host a it goes across the physical and data link layers to the router which goes up to the network layer the router makes a decision of what next hop to go and so on and eventually you get to the destination host and then it comes on up through the application through the various layers in the operating system at the host side and eventually into the sockets and the application so these these arrows represent communication but at an abstract level it's only the very lowest ones that represent direct connections okay and so the way we can look at uh for instance this communication is we can think well we've got an application with some data what happens is it goes through a transport layer where we wrap a transport header around it so that's like the udp uh ports for instance and then we wrap a network header on it which adds the ip address and so on and then we put a frame header which is the um the mac addresses for let's say ethernet like i said and then uh that goes down the physical layer there's some bits that are transmitted the other side and then things are unwrapped so this is like adding an envelope that then you put it inside another envelope inside another envelope it gets transmitted and then you pull it out of the envelopes and eventually get back to the other application okay um so uh this wrapping is something that is basically this layering that we're using for abstraction it can get expensive and sometimes uh really high performance routers are going to completely violate all of these layers and they'll squash everything out and process 
everything at once in parallel in an fpga or whatever but it's important to try to understand the process the way i've given it to you here where it's putting a series of envelopes together and then taking the series of envelopes apart and the other thing i wanted to point out here is that from the network layer to the network layer this is machine to machine it's really this transport layer that hands off to the right process okay so that's where we demultiplex based on port and then eventually the right application gets it because we've de-multiplexed it at the transport layer once we've gotten through the network layer questions now um let's look at these transport protocols a little bit so transport protocols are things that we put on top of ip we gave you udp earlier that's protocol 17 that really means that you put a 17 in that red field i showed you earlier this is a no frills extension of best effort ip to be process to process rather than just machine to machine like ip is uh tcp which is something which we'll talk in more detail about in the next number of slides is more reliable okay so it's got connection setup and tear down you discard corrupted packets you retransmit lost packets you make sure there's flow control so you never overflow anybody's buffers there's congestion control so that if too many people are trying to use a link in the middle everybody fairly backs off and so on and so that's going to be a slightly different animal than udp and furthermore tcp is a stream which i'll show you in a moment there's a lot of examples obviously there's eight bits there in that protocol field so there's many different things other than udp and tcp there's for instance dccp which is another datagram protocol there's rdp the reliable data protocol there's sctp which is a pretty cool multi-stream version of tcp that isn't used all that much so there's many different things you can put in that protocol field
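as a side note, the envelope wrapping described above can be sketched in a few lines of python; the header layouts here are invented for illustration, they are not the real ethernet, ip, or udp wire formats:

```python
import struct

# toy layered encapsulation: each layer prepends its own "envelope" (header);
# field layouts are made up for this sketch, not real wire formats
def encapsulate(data, src_port, dst_port, src_ip, dst_ip):
    transport = struct.pack("!HH", src_port, dst_port) + data  # transport: two 16-bit ports
    network = src_ip + dst_ip + transport                      # network: two 4-byte addresses
    frame = b"\xaa" + network                                  # frame: 1-byte stand-in header
    return frame

def decapsulate(frame):
    network = frame[1:]      # strip the frame header
    transport = network[8:]  # strip the two 4-byte addresses
    return transport[4:]     # strip the two 2-byte ports

pkt = encapsulate(b"hello", 1234, 80, b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02")
print(decapsulate(pkt))  # b'hello'
```

the receive path peels the envelopes off in the reverse order they were added, which is exactly the unwrap-at-each-layer picture from the lecture.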
just to flash back to i don't know a month and a half ago we were talking about this client server example for a web server and if you remember we talked through the various setups and so on where the server gets a listen port the client connects and then there's a socket that's set up and so on and ultimately once everything's set up we somehow are able to write and read through the socket and everything just works reliably as a stream and so we're going to talk about how that works now the question here that's in the chat is sort of how many non-tcp and udp protocols are actually used you know they're used for a lot of things that you might not normally encounter like for instance if you have an encrypted vpn from point a to point b one of those protocols is used basically for the encrypted packets and there's other pieces uh port 500 that's actually a udp packet but that's used to set things up and then it's the encrypted ip protocol after you're done there's a number of other protocols there that are actually used in ways that help manage so they're around the outside of the typical connections that you run into but obviously tcp and udp are extremely common another good example would be streaming multimedia connections those also use other protocols so data link is talking about the part of the protocol that gets you one hop that's part of the networking protocol a datagram is just a packet that gets tossed through the network and that might or might not make it all the way so those are different things data link is a layer in the networking stack a datagram is the thing that we're sending it's a packet so back to our sockets here let's take a look at kind of what is involved in this middle part here and actually communicating
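as a quick aside, the datagram behavior is easy to see with python's socket api; a minimal sketch sending one udp packet over the loopback interface:

```python
import socket

# minimal udp demo on the loopback interface: no connection setup,
# no reliability machinery, just one datagram from process to process
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))             # port 0 lets the os pick a free port
dest = recv_sock.getsockname()               # the (address, port) the sender targets

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"datagram payload", dest)  # fire and forget

data, src = recv_sock.recvfrom(2048)         # src is the sender's (address, port)
print(data)                                  # b'datagram payload'
send_sock.close()
recv_sock.close()
```

note that the two addresses, the two ports, and the protocol (udp) together form the five-tuple mentioned earlier that names the conversation; over loopback this one datagram arrives, but in general udp gives no such guarantee.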
and then we'll talk about setup and tear down so the problem of getting reliable delivery is that all physical networks garble or drop packets we said that already um so the physical media media um has lots of problems like the packets might not be transmitted or received it might be that multiple people trying to talk at once in which case there's an exponential back off that has to happen um if the if you transmit close to the maximum rate you might get more throughput but you might start losing packets okay and so there's sometimes there's this trade-off between throughput and and absolute reliability there's also if you're in a very low uh power scenario you might transmit an extremely low voltage right on the edge of a bunch of errors uh occurring but you put a heavy forward error correction code on it to make up for that and so there's there's a lot of playing with the fact that these packets are unreliable okay and if you remember from the end and principle again if we put reliability by re-transmitting on the end points it means that things don't have to be perfect in the middle and in fact we may not want them to be perfect we just want them to be good enough that we can re-transmit and get the data through eventually the other thing that's going to be a big deal is congestion so if too many people try to go through too small of a pipe in the middle of the network then they're going to have to stop dropping start dropping packets because the routers will have more input than they can for their outputs and so they're going to have to drop packets that's kind of the ip idea okay so um and there's many options i kind of give here uh insufficient queue space uh a broadcast link with hosts going at the same time buffer space the destination rate mismatches you're sending it too fast and so on can cause congestion and then the way we so we want to start with that we have to start with that we want to make reliable message delivery on top of that so what are we 
going to do so we're going to need to have some way to make sure the packets actually make it so that every packet's received at least once and every packet's received at most once um and that because uh if we get duplication that we're not aware of or we get dropping that we're not aware of then all of our applications that are relying on that are going to start having problems okay or they're going to have to do the all the work on their own and this is a level of uh this reliability is common enough need that we're going to want to make sure that we can do that in a common facility like tcp rather than having everybody roll their own okay and we're going to show how dealing with misordering the network and dealing with dropped packets and dealing with duplication are actually handled by similar mechanism so that'll be nice so tcp is really a stream okay so the idea is you this is the alphabet right a b c d so you stream the alphabet in you know or your bites in on one side they show up on the other side uh every bite that goes in comes out uh the other side and you know we don't see duplication and the other thing about it being a stream is there's really no um we're not packetizing it it's just you send bytes in and bytes come out and if you care about packets it's going to be up to you to to make a packet protocol where you say well every message in my connection it's going to start with a length and then the data is going to be after that and now i've got a packet okay but that's that's up to you you the user to packetize on your own um now of course underneath the covers is all the ip packets but this trend the tcp view is really that bytes go in and bytes come out okay and there may be many routers in the middle and it just works okay now this is a protocol 6 in that little red ip protocol point that i showed you earlier uh it's a reliable byte stream between two processes on different machines over the internet okay and we get read write flush etc and you 
know that's exactly our web server web client example that we gave you with sockets the sockets are going to be the things that connect on either end of the tcp and this is basically going to talk about what's inside that process so some details which we're going to go into in a bit but um since the underlying system has got a limited packet size and so on it's going to be up to tcp to take your large stream's worth of data and fragment it into lots of little pieces sometimes in the middle of the network ip will fragment into further pieces and so we're going to need to make sure that after we've fragmented it we can reassemble it at the other side and we can reassemble it in order it's going to use a window-based acknowledgement protocol and i'm going to show you a lot more about that in a second to minimize the state at the sender and receiver and make sure that the sender never sends more data than the receiver has space for and the sender never sends things so quickly that it clogs up the routers and prevents other people from using this okay and so this windowing is going to be important for both reliability and for being a good citizen in the network and obviously automatically re-transmitting lost packets okay and being a good citizen so without further ado so one of the problems is dropped packets how do we deal with that and again we've said multiple times that all physical networks can garble or drop packets and so ip can garble or drop packets as well and so that means we gotta build reliable messaging on top of that and so the question is how are we gonna do that well the thing that we typically do is use something called acknowledgements okay and so the idea here is you've got a communicating with b and so a sends a packet to b and then b sends an acknowledgement back okay and what is the acknowledgement good for well it says first of all b got it it says hi i'm b and i got
this packet okay and assuming that we put a checksum on the packet then b can also detect garbled packets and just throw them out and um in those instances you could imagine b sending back a nack or a negative acknowledgement in fact what happens is b just treats a garbled packet as one that just never arrived and uh so that's going to cause the other mechanism to come into play so if a sends a packet to b which gets lost along the way or garbled eventually there'll be a timeout at a and then a will send the packet again and eventually we get an ack okay so some questions about this if the sender doesn't get an ack does that mean the receiver didn't get the original message what do you think so just because a doesn't get an ack back okay right so i see no i see unknown i see who knows good this is very philosophical tonight so just because you don't get an ack doesn't mean that a didn't successfully transmit something to b like for instance the ack could have gotten lost on the way back so what that means is once we do a timeout and re-transmission suddenly we've got duplication as an issue okay so um what if the ack gets dropped or the message gets delayed same idea so now all of a sudden we've got issues here now i see somebody asking about byzantine so we're going to assume here for the moment that the network is trying to do its best to act in the way it's supposed to so we're not going to worry about malicious components in the middle or b being malicious so let's just look at the underlying message transmission and then the way we get byzantine agreement on top of that is we build something on top of unreliable messages but let's at least see whether we can get our messages to make it from a to b all right so um what we've just talked about here is what i would call stop and wait so we send we wait for an ack repeat okay this is like you know put it into the washer turn it on wash repeat over and over again right so uh we call the round trip
time the time for the packet to get from the sender to the receiver and the ack to get back the round trip time uh represents basically twice the transit time of course and um we can talk about a one-way delay d which is the time from when the sender sent it to when the receiver got it and so two times d is going to be our round trip time okay and uh we keep doing that and as you can imagine the problem with this is there's a lot of lost opportunity here because we have one packet kind of going at a time okay and how fast can we send data well we can actually use little's law of all things if we've got a bandwidth b then b times the round trip time kind of tells us something about the number of packets that are on the wire or waiting in the queue but in fact we've set this up so that we only have one going at once and so the bandwidth is basically one packet per round trip time and this depends only on latency not on the network capacity so it doesn't matter you could basically have two cans and a string on either side here for all that matters because you know we're not sending very fast this doesn't have to be a gigabit link okay in fact you could do this computation pretty simply like suppose the round trip time is 100 milliseconds and the packet's 1500 bytes you come up with about 120 kilobits per second which is pretty slow okay so stop and wait is clearly not what we want to do we've got to get some more packets going okay if you have a 100 megabits per second link you're wasting a lot of it you know almost a thousand times so um and the other thing is how do we know when to time out and re-transmit right so here's a case where the sender sent something the ack didn't make it or it got lost somewhere along the way clearly the timeout needs to be at least as long as the round trip time before we start resending uh because otherwise you know we'll resend before getting the ack back so that's not so good
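the stop-and-wait arithmetic from a moment ago can be checked quickly; a small sketch with the lecture's numbers:

```python
# stop-and-wait sends one packet per round trip, so throughput depends
# only on latency, not on link capacity
def stop_and_wait_bps(packet_bytes, rtt_seconds):
    return packet_bytes * 8 / rtt_seconds  # bits per second

# the lecture's numbers: 1500-byte packet, 100 ms round trip
print(stop_and_wait_bps(1500, 0.100))  # 120000.0, i.e. 120 kilobits per second
```

against a 100 megabit per second link that is a utilization of about 120 kb/s over 100 Mb/s, roughly one part in 833, which is the "almost a thousand times" of wasted capacity mentioned above.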
we're going to need to be estimating this timeout somehow with knowledge of the round trip time and um you know if the timeout is too short you get a huge amount of duplication if it's too long then packet loss really becomes disruptive even if you just happen to lose one packet you wait a huge amount of time to keep going um you're going to really suffer for your communication okay so and then how to deal with duplication i mean here's a situation maybe where the ack just got delayed and we went and retransmitted but then the ack comes in and we get another ack and now we've got two copies at the receiver okay so how do we deal with message duplication well we put a sequence number in okay and this is a very simple one-bit sequence number where the sequence number is either a zero or a one and the idea is the sender is going to keep a copy of the data in its local buffer until it sees an ack for that sequence number okay and then furthermore the receiver is going to track packets and by having exactly two options a zero or a one the receiver can figure out if there's a re-transmission because it'll see two packet zeros in a row and it can know to throw one out because it's a duplicate okay so when we start putting some acknowledgment numbering or sequence numbering onto the packets we can start getting rid of duplication at the receiver and figuring out how long the sender needs to hold on to things to retransmit okay we're going to call this the alternating bit protocol so the pro of this of course is it's very simple it's one bit the con is really that if the network can delay things arbitrarily then a packet zero might have gotten stuck in some router in the middle and then got transmitted later and you might not be able to disambiguate the duplication with only one bit so clearly that's a problem and furthermore we're still doing one packet at a time in the network so this doesn't
look great so what should we do here to up our bandwidth and deal with more unexpected delays in the network okay don't wait and send more packets all right i'll buy that but that would seem to make the problem of disambiguating duplicates at the receiver worse so what else do we have to do okay yep we want to sort packets later so what do we need in our sequence numbers yeah so we're going to need more than a bit right because one bit you know distinguishing between packet zero and packet one and then repeating with packet zero that's clearly not enough okay so we need a larger space of acknowledgements okay so that seems simple right it's sequence numbers um and now we've got pipelining possibilities because we don't have to wait for each ack before we send more okay so here's what we had before you know sender sends receiver receives but now we have the potential to have many outstanding packets and many received packets in a way that basically allows us to fill up the network okay so if you look during this round trip time what you see is during that round trip time we have many packets that are on their way to the receiver and many acks that are on their way back and as a result we can actually fill up the network pipe and start getting our actual network bandwidth back rather than something that depends on the round trip time okay so the acks also are going to serve a dual purpose here so one assuming that every one of these outgoing packets has a unique sequence number on it then clearly we can confirm that a particular packet made it because we see its sequence number come back and we can deal with ordering so if we have packets 0 1 2 3 4 5 6 7 whatever and they arrive out of order we can reorder them at the receiver side back into sequence number order and deal with misordering okay and so the acks in addition to this reliability aspect also help us with ordering okay so this seems like we're going into a good
possibility here now how much data is in flight well if you take the round trip time times whatever your actual bandwidth is okay that's going to give you the sending window that basically makes sure that you have a lot of data out in the network and basically lets you fill up the pipe both in the forward and reverse direction okay and so b in this case is bytes per second remember this is something we learned in chemistry in high school basically you've got to match up your units so round trip time is in seconds b is in bytes per second the total here is in bytes so in this case w send is how many bytes do i want to have in the network at once in order to make sure that nobody is waiting for packets okay and so this w send is like the sender's window size and packets in flight if we wanted to count packets instead we could take this sending size divided by the packet size and that tells us how many packets we need to have outstanding to fill everything up okay so how long does a sender have to keep packets around so that's an interesting question right so the question is how long do we need to hold on to this and the answer is well until we know that a particular packet has been acknowledged right and so certainly we need to have enough buffer space in the sender to hold at least a round trip time's worth probably a little bit more in order to allow us to lose some packets and cause some re-transmission okay now the other question is would a timeout result in starting over from the beginning um well what do you think do we need to resend every packet if we lose just one so good so it seems on the face of it that we'd want to only send the ones that haven't been acked and because we have labeled every packet with a sequence number then in principle we could figure out which ones haven't been received and which ones need to be acknowledged again okay and so that's certainly plausible for us now it depends on your
protocol whether you always have the ability to individually transmit packets or not or whether you have to go back and do a certain range of them or whatever but at least in principle we have enough information to resend only the things that were lost okay now how long does the receiver have to keep the packet's data so the data at the receiver side certainly has to be there long enough to do reordering so if we get a bunch of the later packets we need to make sure we have enough space to absorb the early ones so that we can absorb the early ones and then send them in order to the actual application at the receiving side so we need to have enough space for that and also we're going to need to store data until the application's ready so perhaps it's busy doing something else and it hasn't executed a read against the socket yet so we need to hold on to data at the receiver as well and then of course you have to worry about the following what if the sender is blasting packets at the receiver and the receiver just is too slow and as a result a bunch of the data that was sent actually made it to the receiver only to be thrown out at the receiver so that seems like probably a bad idea right so here's a bunch of interesting questions okay so let me talk a little administrivia here just remember we've got a midterm not this thursday because folks are going to be hopefully overindulging in food on thursday but a week from thursday is going to be midterm three okay and uh camera and zoom screen sharing just like in midterm two we'll mail out all your links there's gonna be a review session link that will come out in the next day or so and everything up to lecture 25 so it's this lecture and next monday's lecture and we have no lecture on wednesday this week okay and lecture 26 will be a fun lecture so if there's any topics in particular you want to cover let me know and i don't think i have too much more to say on this i have a
question about whether this is closer to a final or closer to midterm two. As I think I've said before, every midterm is in principle cumulative, in the sense that you need to not have forgotten everything you learned, but we will certainly focus on material from the last third of the class, while potentially asking questions that would require you to not have forgotten earlier parts of the term. Now, I'm not going to go into this in great detail, but please be careful with collaboration. I realize we're getting down to the end of the term, but remember: explaining things to someone at a high level and discussing things at a high level is fine, but not sitting down line by line and going through everything. If there's a lot of individual syntax transfer on homeworks and between project groups, it's probably too much sharing, so just be careful. And don't get friends into trouble by asking them for their code over and over again, because you'll put them in a bad position of having violated our policy as well, so try not to do that. I've talked about this the last couple of lectures, so I don't want to go into it in greater detail. So let's keep going on this a little bit. I think the idea of having a big sequence-number space and sending a bunch of messages into the network to get pipelining sounds like a good idea. But if you remember, when we set up queues or pipes between processes on a local machine, we had a queue in the middle and we had blocking, because the queue had fixed capacity: if you wrote and the queue was full, the writer would actually get put to sleep, and if you went to read and there was nothing in the queue, the reader would get put to sleep. We would like to have something similar to what we had with pipes, but across the network and using TCP; the question is how we go about that. So, buffering in a
TCP connection: we have process A and process B. There's a send queue on A's side and a receive queue for that particular stream, and then there's also a pair going in the other direction. If you remember, sockets are bidirectional, and when we set them up we have queues on both sides, and we want to make sure there's proper blocking so no data gets overwritten or otherwise lost. And so a single TCP connection needs four in-memory queues, as we just said, and the window size for a connection is essentially how much remaining space it has in the receive queue. So, for instance, if this receive queue has a hundred bytes left in it, the other host is really only allowed to send another 100 bytes until things start getting absorbed, because we never want the host to overrun the receive queue. Furthermore, just acknowledging that bytes have been received is not enough, because the receive queue could still be full, since host B hasn't pulled things out. So what we really need is some way for the receive queue on either side to tell the sender how much space it actually has left in its queue, and to make sure the sender never sends more than that; that'll prevent us from overwriting at the destination. And so each host advertises its receive-queue window size in every packet going the other direction; it keeps saying, "here's how much space I have in my queue now, here's how much I have now," and as a result we can do this buffer management so that we never overflow a host or lose data. So the idea is we're going to build a sliding window protocol: the TCP sender knows the receiver's window size and tries never to exceed it. Packets that it previously sent may arrive, filling the window up, but we want to make sure there's never more in transit than there is buffer space at the destination, and you're allowed to keep sending data as long as there's enough space guaranteed at the destination. And I'm going to
show you how that works in a second. So here I'm going to talk about packets of space at the receiver, even though normally it's bytes. The window size to fill is, let's say, a bandwidth in packets per second times the round-trip time, which tells us how much we want to have in flight at once; this is a form of Little's Law again, used to figure out how far ahead we can go. For instance, here's a case where we have an unacked packet, which we're going to call packet one, that got sent, then another packet, so now the send window says that one and two are outstanding; then one, two, and three are outstanding, and we're going to assume that we're not allowed more than three packets at the destination. Eventually, because packet one came in in order, we've received it and potentially sent it up to the application; at that point the receiver will say, "well, I actually now have space at my destination for another one," at which point we'll send another one, and so on. And here the receive queue is basically never holding on to anything; it's sending everything up, so each one of these acks is basically saying, "I still have three available, I still have three available." But you can imagine that if the receiver were holding on to data because the application at the receiver wasn't absorbing it, then this queue would start filling up. Now, what if you never get an ack from the receiver? What happens in that case is that, if we go back to this point, the sender will stop sending, because it only knows that it's got three packets' worth of space, and it'll stop. And if no acks come back, then at that point I'll start resending from the earliest one that's missing: I'll start resending one, and then two and three, over and over again, waiting to finally get an ack back. And once I get
an ack, then I can go forward. So the short answer to "what happens if you never get an ack" is: you go up to the point at which the receiver has enough buffer space, and then stop. A timeout doesn't necessarily reset everything: if I time out at this point, I'm going to keep resending stuff that's in my send buffer, and when it gets to the receiver, the receiver knows that there is space for it, because it's the first slot in the receiver's buffer queue. I'm never going to get past sending packets one, two, or three until one of them is actually acked, and then I can send four. So it isn't a full reset on timeout; it really is, "oh, some of the stuff I thought I sent must not have gotten there, because I got a timeout, so I'm going to resend things." So the difference between timeout and acknowledgment is: a timeout means resend; an acknowledgment means move forward. And notice how this window here is advancing: once I've got that first ack, now two, three, and four are in my sending sliding window, and at the receiving side these packets have potentially come in, but I'm forwarding things up as quickly as I can, so we're never building up any buffer space at the destination. I'm going to show you in a moment what happens if you do build up at the destination. Now, here we go: TCP windows in bytes, not packets. If you look, we can think of the space of sequence numbers in TCP not as a packet count but as a byte count. Remember, TCP is a stream, so there's a continuous stream going in; we have an arbitrary sequence number that we start at, and then we can look at this space of sequence numbers where each sequence number represents another byte in the stream. And so we have the set of sequence numbers representing bytes that have been sent and already acknowledged, the set of bytes that have been sent but not acknowledged, and the set of bytes that
haven't been sent yet; but this is a continuous stream from the initial sequence number, incrementing by one each time. And then at the receiver we have the same set of sequence numbers: on this side are the parts of the sequence-number space that have been received and potentially given to the application, here we have ones that have been received and are being buffered, and these are ones that have not been received yet. So this buffer here in the middle is the thing that we want to make sure we never overflow, and I'm going to show you how that works in a moment. Okay, questions? So we're not acking packets, we're acking bytes, and that means we can ack a whole group of bytes at once by giving the sequence number of the end of those bytes. Let me show you; this is where packets come back into play. Here's an example of the receiver's receive queue, and this is an acknowledgment that came back from the receiver to the sender. What it's saying here is: sequence number 100 is the next sequence number I'm expecting, and there are 300 bytes' worth of space in my queue. So now we're going to send a packet in TCP that says, "here's sequence number 100, and it's got 40 bytes in it." That means that after this packet is received, what the receiver acknowledges is 140 as its sequence number, because it's received 40 new bytes beyond what it had before; and furthermore, notice that the buffer now only has 260 bytes free, no longer 300.
And as I go again, you'll notice that the number of bytes free keeps going down. What that tells the sender is that the buffer on the receiver side is filling up, and it's never going to send more into the network than it knows is available. So at this point, at 210, it knows that from sequence number 190 it can do another 210 bytes above 190 and be okay. Now here's an example where something happened to a packet in here: the one carrying sequence numbers 190 through 230 got lost somehow, but we sent another one, which was sequence number 230 with size 30, and we got back an acknowledgment which might not be what you expected. If you look here, what you see is that the acknowledgment says, "well, the latest fully sequential thing I've received ends at sequence number 190, and there are 210 bytes after that available." So this basic TCP protocol acknowledges the sequence number that represents a solid, contiguous run of bytes up to that point, and ignores holes and other things that might have been received beyond it. Now, this is useful, if you can imagine, because what it really says is: yes, the receiver got back some data and received some data, but it doesn't make sense to fully acknowledge it yet, because data after a hole isn't useful to anybody in a streaming protocol. Now let's look a little further: you can see that this continues for a while, and we haven't changed anything about our acknowledgments, and the reason for that is we're missing bytes between 190 and 230. Eventually there'll be a timeout, we're going to retransmit the missing data, and if you notice what happened there, we fully filled in the hole, because the buffer at the receiver is doing the right thing, and the acknowledgment that comes back now is, "oh, I've received everything up to sequence number 340, and by the way, I only have 60 left." So then we can finish this up, etc., and at some point, when we start feeding these bytes up to the application
because it did a read of 40 or 30 or whatever, then these acknowledgments will start coming back and saying, "oh, there's more space in my buffer." So if you ever wondered why, when you set up a TCP channel and you start sending data and the application on the other side freezes and isn't absorbing the data, the TCP channel will literally shut down: it's because the sender knows that there's no buffer space at the receiver. All right, so at that point we've basically shut down, because we've filled up all the buffer space and the application at the receiver side isn't absorbing any, and so the sender stops. And the way this worked out for us is that all of the information we need is in this queue size at a given sequence number, and so that allows us to put as many bytes into the network as we want in a way that won't violate the notion that all the bytes in flight would fit in buffer space at the receiver; we have enough information to never violate that. The only other thing now is to send only enough data out into the network to try to meet that round-trip time times bandwidth requirement (it's actually the bandwidth of the slowest link in the middle), and no more, because otherwise we'll start causing congestion. Okay, so here's a question: during the time when the 190 packet is missing (let's just go back here), what if the sender sends too many packets and causes the receiver buffer to be full? So the thing here is, it's not going to send too much: it's going to send at most 210 bytes past the 190.
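Here's a small sketch of that receiver-side bookkeeping: a cumulative ack that advances only over a contiguous run of bytes, plus an advertised window that reflects remaining buffer space. The numbers mirror the worked example above. Tracking individual byte numbers in a set, and charging buffered out-of-order bytes against the window, are simplifications for clarity, not exactly what real TCP does:

```python
class ReceiveQueue:
    """Toy sketch of TCP receiver bookkeeping: cumulative acks + window.

    Simplified: bytes stay buffered until the application reads them, and we
    track individual byte numbers in a set purely for clarity.
    """

    def __init__(self, init_seq, buf_size):
        self.next_expected = init_seq   # next in-order sequence number
        self.buf_size = buf_size        # total receive buffer, in bytes
        self.buffered = set()           # byte numbers currently held

    def receive(self, seq, length):
        self.buffered.update(range(seq, seq + length))
        # Cumulative ack: advance only over a contiguous run; holes are ignored.
        while self.next_expected in self.buffered:
            self.next_expected += 1
        window = self.buf_size - len(self.buffered)
        return self.next_expected, window   # (ack number, advertised window)

rq = ReceiveQueue(init_seq=100, buf_size=300)
print(rq.receive(100, 40))   # (140, 260): 40 in-order bytes, window shrinks
print(rq.receive(140, 50))   # (190, 210)
print(rq.receive(230, 30))   # (190, 180): hole at 190, so the ack stays put
print(rq.receive(190, 40))   # (260, 140): the retransmit fills the hole
```

Notice how the out-of-order segment at 230 does not move the ack forward, and how one retransmission of the missing bytes lets the ack jump all the way past everything that was already buffered.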
So it knows that this is the space that's free, which means it knows that past 340 it doesn't have more than 60 bytes available, so it's not going to send anything past what would fit, and it's up to the receiver to reorder based on sequence numbers to put things back in the buffer. Now, what if you go beyond 400 before the retransmit? Again, that's not going to happen, because we are never going to get the go-ahead to transmit beyond 400 until the buffer space opens up: when we get to this point, we know that sending beyond 400 would bring the free space below zero, so it'll never happen, and it's only when this opens up again, after these bytes have been absorbed by the client, that we can start sending again. Good. So congestion is an issue. Congestion happens because we have too much data flowing through the network, and if you look, all of this different data is using shared links, and so IP's solution here is to drop packets. The question might be: what happens to a TCP connection then? Well, you end up with lots of retransmissions; if you drop lots of packets, what you saw there is that you end up with lots of retransmissions. By the way, I should say, back on that earlier example, I want you to notice that the sender knows where the data was missing, because it knows that it was at sequence number 190, and the moment it resends that missing data, notice that the acknowledgment went all the way up to where data is still missing. So at that point the sender is not going to retransmit the remaining stuff; it's going to pick up where it left off, and so we don't get duplication there. And there are protocols that let you report more than one hole at a time, but we won't go into that now. So with congestion, we need to limit congestion. Why do we get congestion? Well, there are shared links in the middle, and there's too much data going into the shared link, and so whatever
router is at one of those shared points starts dropping packets. And so what we really want to do is back off, so that we don't send too much data, so that the combined rate of everybody sending together doesn't exceed the rate of the router and its outgoing links. That's a congestion-avoidance property. And so we really need to figure out things like: how long should a timeout be for resending messages? Clearly, if it's too long, we waste time when a message is lost; if it's too short, we retransmit even though an ack will arrive shortly. So we need to be tracking the round-trip time, clearly, but there's a bit of a stability problem here: if there's more congestion, then acks are delayed, and you start getting timeouts, which send more traffic, which causes even more congestion, and you start getting this positive feedback loop that causes everything to break down. So you've got to be very careful in choosing the sender's window size (not the receiver's: how much data the sender is going to allow to be outstanding) so as to avoid congestion, to avoid this positive feedback loop. And obviously the amount of data the sender can have outstanding has got to be less than what's at the receiver, so we don't flood it, but it's probably going to be less than that anyway, because we're going to be trying to match the amount of data we have in the network with the round-trip time and the bandwidth of the slowest link in the middle: we're going to try to match the rate of sending packets with the rate of the slowest link. There are adaptive algorithms which adjust the sender's window size, and there are a lot of interesting algorithms that have been developed over the years to deal with that; I have one up in the reading for tonight, the Van Jacobson paper, which starts talking about this a little bit, if you're interested. But the basic technique is going to be: start small and slowly increase.
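As a toy sketch of that adaptive idea, here's additive-increase/multiplicative-decrease in its simplest form: grow the window while acks come back, cut it when they go missing. This is a cartoon of the mechanism, not real TCP slow start:

```python
def adapt_window(events, cwnd=1):
    """Toy additive-increase / multiplicative-decrease window adjustment.

    events: True means an ack came back in time; False means a loss/timeout.
    Grow the window by one packet per ack; halve it on a loss (never below 1).
    """
    trace = []
    for acked in events:
        cwnd = cwnd + 1 if acked else max(1, cwnd // 2)
        trace.append(cwnd)
    return trace

# Four acks, then a loss, then more acks: the classic sawtooth shape.
print(adapt_window([True, True, True, True, False, True, True]))
# [2, 3, 4, 5, 2, 3, 4]
```

The halving on loss and the slow linear growth afterward are exactly what produce the sawtooth the lecture describes, and why two senders sharing a link converge toward splitting it evenly.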
You grow the window until acknowledgments start going missing; once the window's too big, you know you're sending too fast, and you back off. That's the basic way these adaptive algorithms try to get enough data in the network to make maximal use of that slowest link without causing congestion. This is called slow start: you start sending slowly, and typically what happens is that when you start seeing acks being lost, you cut your window in half and then work your way back up. So typically there's a sawtooth behavior as the algorithm adapts and figures out the right amount of data to have in the network. The cool thing about these kinds of adaptive algorithms is that if a new sender comes along, all of a sudden acks will be lost, you'll start losing packets, and both senders will back off until they hit a situation where they're equally sharing the link in the middle; that's the way these congestion-avoidance algorithms work. And so, if you actually measure what TCP does, you get this typical sawtooth behavior around the right bandwidth for that middle link. So the question here is: aren't acks more likely to be timed out with smaller windows? I'm not sure I fully understand, but the acks are coming back in the other direction, and the acks are basically reflective: what's happening is that when you see the same ack come back over and over again, you know that the data you sent out got lost, and that's the notification that the forward packets have been lost; that's the point at which you make some decision to back off the amount of data you have in the network. Okay. Now, if you recall the setup: remember this, where you request the connection, the server socket is listening, it takes the connection, it constructs a new five-tuple-style connection between two sockets, and then it lets you go.
And remember, the five-tuple is: source IP address, destination IP address, source port, destination port, and protocol (like TCP), and that setup is really setting up a TCP channel. So what does that mean? To establish, we have to open a connection, which is a three-way handshake; then we do what we've just been talking about, which is transmitting data back and forth; and then we tear everything down when we're done. So here we're back to this client-server picture, but now let's look at this part, which is the setup, and it's really a three-way handshake. The server is calling listen over here; the client calls connect, which sends a request over. And it looks like this: a SYN (synchronize) bit is set in the header, and it proposes a sequence number for communication from client to server. The server accepts the connection: it sends back an acknowledgment of that forward SYN and a new SYN for the other direction, with its own proposed sequence number. And then finally there's an ack coming back; this last ack acknowledges the server's connection from server to client. So it's three messages, and when you're done, you've both agreed on the starting sequence number in the forward direction and the starting sequence number in the reverse direction, and you've both agreed that this is a connection that's going forward. Great. The other thing is just to show you the shutdown. Shutdown is actually a four-message exchange: when host one is done, it sends a FIN bit in the header; the other host acks the FIN (a FIN-ack) along with the remaining data, and then eventually closes things down with its own FIN, and you get a FIN ack in the other direction. So there are actually four control messages to shut down, and then eventually, after a timeout, everything is deallocated.
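You can drive this setup and teardown from the programming interface we've seen; here's a minimal sketch in Python, where connect sends the SYN, accept completes the handshake on the server side, and close starts the FIN exchange. The echo logic is just filler to show data flowing both ways:

```python
import socket
import threading

srv = socket.socket()               # a TCP (stream) socket
srv.bind(("localhost", 0))          # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _addr = srv.accept()      # completes the three-way handshake
    conn.sendall(conn.recv(1024).upper())
    conn.close()                    # sends FIN; teardown begins

t = threading.Thread(target=serve_once)
t.start()

cli = socket.create_connection(("localhost", port))  # sends SYN, waits for SYN+ACK
cli.sendall(b"hello")
reply = cli.recv(1024)
print(reply)                        # b'HELLO'
cli.close()                         # client's side of the FIN exchange
t.join()
srv.close()
```

All the SYN/ACK/FIN machinery happens under the covers; the application only sees connect, accept, send, receive, and close.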
And just like regular files, if you have multiple file descriptors open on a socket, the socket only really shuts down when all of them close. So how do we actually program a distributed application? We need to synchronize multiple threads on different machines. If you remember from last time, I was talking about messages, and now we've got this idea of how to build a reliable stream in both directions, so the question is: what next? Suppose we want to build an application on top of this. Well, one of the things that comes up is data representation: an object in memory on one side has a very machine-specific binary representation that may mean nothing on the other side. So if you're trying to send data from host A to host B and you want it to be understood on host B, what are you going to do? Well, you're going to have to agree on some standardized way of communicating with each other. In the absence of shared memory, externalizing an object requires us to take an object (think of a linked list for a minute: it's a bunch of objects that are all linked together with addresses and all that sort of stuff) and serialize it into bytes, so that it can be sent over the link. Serializing it into bytes, and then marshaling the object together into a message and sending it off, is what you do at the sender side; on the other side, you unmarshal (so you take it apart) and deserialize it back into a local representation. And it's possible that the two hosts have different representations: one might be big endian and the other little endian; I'll remind you what that means in a moment. So this serializing and marshaling process has to be done in a way that allows the two hosts to communicate no matter what their representations for various things are. So, for a simple data type, let me just show you: suppose you've
got a 32-bit integer and you want to write it to a file; let's back off from sockets for a second. You open the file, that's all fine and dandy, and then you have a couple of choices: one, you could actually print it out as an integer in ASCII text; the other, you could write it as binary, with four bytes. And those two things look very different in the file, and the application that reads it back in needs to know which it is, otherwise it's not going to be able to interpret them. So neither of these two options is wrong, but the receiver needs to be consistent. And this gets even trickier when you're going across the network, because if I'm trying to send a four-byte, 32-bit number across the network, how do we know that the recipient interprets it the same way? For instance, if you remember from 61C, they talked about endianness: several of the different types of machines are big endian, a number are little endian, and the question is how we match those up if we're trying to communicate. Here's a good example on a little-endian machine, where we take an integer, 0x12345678, and then scan through the in-memory representation: what you see is that the first byte of that in-memory representation is 0x78, so the least significant byte of the integer is actually the first byte in memory; this is clearly a little-endian machine. And you can write this endianness routine on your own and try running it, and see what you get. So what endianness is the Internet? The Internet has chosen big endian as the standardized network byte order, and so typically what happens is that when you're sending something across the Internet, you put things in network byte order, and then the other side unpacks them from network byte order into its local host order. So you have to decide on the on-wire endianness.
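That endianness routine is easy to write yourself; here's a Python version of the same experiment (the lecture's version would presumably be in C, scanning a char pointer over the integer), plus the network-byte-order packing:

```python
import struct
import sys

value = 0x12345678

native = struct.pack("=I", value)   # the 4 bytes as this machine stores them
network = struct.pack("!I", value)  # "!" forces network byte order (big endian)

# If the first byte in memory is 0x78, the least significant byte comes
# first: a little-endian machine. Otherwise it's big endian.
kind = "little" if native[0] == 0x78 else "big"
print(kind, sys.byteorder)          # the two should agree

print(network.hex())                # '12345678': big endian reads straight off
```

The last line illustrates the lecturer's point about why people like big endian on the wire: the hex dump of the big-endian bytes reads off in the same order you'd write the number.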
We just decided that if we're talking across the network, it's typically big endian; then we convert from the native endianness to the on-wire format on the source side of the communication, and we unpack it on the other side from the on-wire endianness to the local format. Now, a downside of this, perhaps, is the fact that if you take two little-endian machines and they communicate over the network, they're both going to convert to and from big endian to make that communication happen. So the question is: is there a rationale for big endian versus little endian on the wire, or do you mean in different processors? If you're asking why network byte order is big endian: I think the good thing about big endian is that if you were to take a hex dump of some memory and you look at a big-endian number, you can just read it directly out. So big endian has that nice property that it requires a little bit less brain gymnastics to read through a memory dump; that would be my only explanation of why it was preferred. I don't know; I guess at this point it's all about standards, so I could just say it is what it is and we've got to stick with it, but I think people probably like big endian because you can read it directly out. Now, I grew up with little-endian processor assembly language design when I was younger, so I'm not as thrown for a loop when I see little-endian numbers, because I re-scramble them in my brain and it mostly works. But anyway, I think that's the reason people like big endian: you don't have to re-scramble. What about richer objects, like lists and whatever? What do you do? Well, if you want to transfer a linked list of structures from point A to point B, you've got to come up with some standard for serializing it so that it can be packed and unpacked. And there are lots of serialization formats: there's JSON and XML, you name them.
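For example, with JSON the serialize/deserialize round trip looks like this; the record below is a made-up stand-in for whatever linked structure you want to ship:

```python
import json

# A hypothetical in-memory structure to send to another host.
record = {"name": "rutabaga", "sizes": [1, 2, 3]}

wire = json.dumps(record).encode("utf-8")   # serialize: object -> bytes on the wire
print(wire)

back = json.loads(wire.decode("utf-8"))     # deserialize on the receiving side
print(back == record)                       # True: same value, rebuilt locally
```

Notice that the wire format is plain text, so it carries no endianness or pointer baggage at all; both sides just have to agree that the bytes are JSON.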
In fact, if you were to google "data serialization", you'd find a whole bunch of different types of serialization; there are many languages and many serialization formats. And of course this raises an issue of standardization: you have to make sure that when you're using a serialization mechanism from point A to point B, you actually do the right thing on both ends. Now, raw messaging, where you just send a message from one side to the other and then build something out of it, is pretty low-level for programming: you have to do a whole bunch of stuff on your own, and you also have to deal with machine representation by hand, doing all the things we said back there. The alternative is the remote procedure call idea, where you call a procedure on a remote machine, and the idea is to make communication look like an ordinary function call and to automate all the complexity of translating between representations. So, for instance, the client might call a remote file system's read of "rutabaga"; at the remote side, the server reads the file rutabaga and sends the results back, and as far as the client (and even the server) is concerned, they're just executing a function call and getting a return. That's called a remote procedure call, and the concept here is pretty simple. Here's a client: it wants to execute some function of two arguments, which turns out to be on a remote machine. What's going to happen is that, to call it, it's going to go through what's called a stub, which is going to marshal those arguments, v1 and v2, put them into a standardized serialization format of some sort, and send them to the receiver. The receiver's stub is going to unpack the message and call the function on the server side; the server is going to produce a return value; we're going to go back the other direction and then return at the client. And if you notice, these stubs are things that are just linked into the client and the server, like regular library
function calls, and they have this nice property that when you link function f with this stub, what really happens when you call f is that it ends up sending and receiving messages. And the server, when it links with the server stub, is really going to end up offering its functions to be called by remote clients; however, when you write the code inside the server, you're going to just be writing normal functions. So this is basically the idea of remote procedure calls: as far as the client's concerned, it's making a procedure call, but it's happening remotely. And so really we can talk about the client stub interacting with handlers that send messages across the network on multiple machines; there's a machine/machine boundary here, and really also an application/application boundary, so we're going to wrap some ports in here as well. Now, can you use RPC for inter-process communication on the same machine? Absolutely. And what's kind of cool about that (that's a good question) is that you could start out with this server on the same machine, and then, if the machine got overloaded, you could migrate the server to another machine; as long as you clean up the packet handling, so the packets are now directed at that remote machine instead of the local one, you don't even have to change the code. All you see is a change in performance. Now, the way this implementation works in general is request-response message passing under the covers. The stubs on both sides are providing glue on the client and server side to connect functions into the network: the client stub is marshaling the arguments and unmarshaling the return values, where marshaling is putting them into a packet and unmarshaling is taking them out of the packet, and they're also responsible for the data-representation serialization we talked about; the server stub does the opposite. So marshaling involves converting values to canonical form,
serializing the objects, copying them so things passed by reference can be sent, etc. And so, some details here. There's an equivalence, really: the parameters of the function call go into a request message, and the result comes back in a reply message. The name of the procedure is typically passed in the request message, and is used at the receiver stub to decide which function gets called. There are mailboxes on either side, so you need to know both the IP address and the port on each side in order to make this connection. The interesting part about this is that there's a stub generator, which is really a compiler that generates stubs. What you typically do is define your RPC with an interface definition language, or IDL, which contains, among other things, the types of the arguments, the return values, etc. The output is going to be stubs in the appropriate source language: you design your interface by writing it in the IDL, and what you produce out of the compiler is code that you can link into both the client and the server side, and now you're able to do RPC. And the way we deal with cross-platform issues is exactly what we just talked about: we're going to convert everything to and from a canonical form, and this is where your particular type of RPC (there are many types of RPC out there) will define, as part of it, what the canonical form is, or how things are serialized; that's part of the RPC package. So how does a client know what it's connecting to? Typically, just as with regular DNS and IP, you're translating the name of the remote service into a network endpoint: a remote machine, a port, maybe some other information. And the process of binding is the process of converting a user-visible name for that service, like "the file server" or something else, into a network endpoint, like an IP address and port, and then connecting it all up. And so then, once you do that, the client can just be making procedure calls.
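Python's built-in xmlrpc library is a convenient small example of this whole pipeline: you register an ordinary function on the server, and the client-side proxy plays the role of the stub, marshaling the arguments, sending the request, and unmarshaling the reply, so the call site looks like a plain function call. A minimal sketch (port 0 just asks the OS for a free port):

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def add(v1, v2):
    # A perfectly ordinary function on the server side.
    return v1 + v2

server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(add)          # expose add() to remote callers
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = xmlrpc.client.ServerProxy(f"http://localhost:{port}")
result = proxy.add(2, 3)               # looks local; really a network round trip
print(result)                          # 5
server.shutdown()
```

Here XML is the canonical wire form, and the generated-stub role is played by the dynamic proxy; a real IDL-based system like gRPC generates the stubs at compile time instead.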
going to the remote machine okay and this is another word for naming and you could either compile in the destination machine or you could have a dynamic check at runtime now the question is when are the stubs initialized so the stubs get linked into the program and they get initialized before you actually start executing code that has the rpc in it so there is this initialization process where you call into the rpc library to do the initialization stuff and once it's connected then you can make your calls so um this dynamic binding uh is good because most rpcs use dynamic binding via some name service just like if you're interested in you know www dot you go to a dynamic dns service to find the current ip address most rpc systems have a dynamic binding service where you say what service you're interested in say a file service of a certain name and it will figure that out for you through a binding process and decide what the actual ip address is and so on what the port is why do we do this well one we can do access control to basically not even give back the names of machines if people don't have access the other is failover so if the server fails we can basically fail over to another one just by changing the binding if there's multiple servers you can have flexibility of binding time so i mentioned uh last time or the time before that google does this a lot when you go and do a google search and you do it from northern california versus i don't know boston you're going to get different places for google um in fact at different times of the day you might get different server names or the same server name with different ip addresses from the google resolution and what they're doing is they're balancing load that way and so that's why a dynamic rpc service is good that way as well okay i think that's all i wanted to say there so what are some problems with this idea so this seems really cool different failure
modes in a distributed system than on a single machine so you know think about the number of different failures maybe a user level bug causes an address space to crash at the other side or a machine failure or kernel bug causes all processes on the same machine to fail or some machine is compromised by a malicious party so in the old days before rpc what you'd end up with is a crash is a crash is a crash pretty much everything fails after rpc you're now reaching out to different services on the network and it could be that you get partial failures because only some of them are working okay now the question here does rpc usually run over tcp uh it either runs over tcp or if it runs over udp which it can occasionally it's got to have its own reliability protocol underneath to make sure things work so running over tcp is certainly the simplest thing for it to do so before rpc the whole system would crash and die and after rpc you get partial failures okay and so you end up with an inconsistent view of the world and you're not sure if your cached data got written back or not you're not sure if your server did what you want and so the handling of failure gets much more complicated in an rpc world but you gain the ability to have your services handled from many places okay so the problem that rpc is the solution to again is that rpc basically gives you a nice clean uh way of looking at remote communication as a procedure call and with that procedure call you don't have to worry about marshaling the arguments you don't have to worry about serializing you get the return value back and so your code is nice and clean it looks like a bunch of function calls okay the downside is you need to make sure that you are able to track failure modes carefully and i will point out by the way that there are a lot of services that use rpc precisely because of the cleanliness of its
interface and because it's very easy as i said to migrate where the services are from the local machine to remote machines without changing any of the programming it's just that there are potentially more complicated failure modes that you have to be careful about and you can do all sorts of interesting things with distributed transactions and byzantine commit and stuff we've already talked about to make your rpc much less failure prone so rpc is not performance transparent right the cost of a procedure call is very much less than the cost of same machine rpc which is very much less than network rpc so there's overheads of marshalling and stubs and kernel crossings and communication that come into play so there is a cost to rpc but the transparency of location is a pretty powerful benefit and so while programmers need to be aware that rpc is not free it still is used in a large number of circumstances and one thing that i will point out here is um now we have a new way for uh communication between domains um we talked about shared memory with semaphores and monitors we talked about file systems we talked about pipes and now remote procedure calls can be a way to even do local communication and so uh you can use this to communicate between things on the local machine or remote machines and just to give you a few there's many rpc systems there's corba the common object request broker there's dcom which is distributed com you'll see that in windows machines a lot there's rmi which is java's remote method invocation there's a lot of different ones out there and one thing i will point out is uh in the early 80s i would say there was this notion of microkernels which we haven't talked a lot about this term yet but um basically the monolithic kernel that we've been talking about pretty much puts all the protected code into uh the kernel address space and applications run on top of that and they make system calls into the kernel the micro
kernel is a little different the only thing that's in the kernel itself is uh thread multiplexing address space control and an rpc service and so in addition to regular applications all of these things that we used to think belong inside the kernel we now put as processes running on top of the microkernel and using rpc to communicate with one another and so if the application goes to read a file what happens is it does an open by doing an rpc into the microkernel which then um talks with the file system that file system does the open sends back a handle to the application etc and so the application is reading and writing from the file system but doing so uh basically through an rpc mechanism to other user level uh processes okay and why do this well um fault isolation so if there's a bug in the file system it won't crash the whole um microkernel right it's only going to crash part of what's going on or if there's a bug in the windowing system okay or other parts of the kernel we basically have isolated the ability of faults to propagate because we isolate them in their own user level address space and we use rpc back and forth okay and it enforces a level of modularity as well okay so this is a good example of using rpc on a local machine to help with the overall structure of the kernel okay all right now if you'll bear with me for just uh one or two more slides i want to set the stage for what we'll talk about on monday once we've got a good messaging service and a good way to do uh you know uh serialization and deserialization across the network we can now start talking about how to build distributed storage and the basic distributed storage problem is the following we have a network with a lot of storage in it so you guys can start thinking about all the cloud storage that you have out there and we have a series of clients that are all using that storage and we can start asking some interesting questions about this so first of all why
bother with this well this is the ultimate sharing scenario because these clients can be using that data that's in the middle of the network no matter where they are so they could be on the west coast here using some data and then they get on an airplane and uh hopefully are careful with their social distancing and their masks and they land on the east coast and now they can read their same data or they can be uh traveling and their data can be read and written while they're going and so this idea of network attached storage is a very powerful one okay but it's a little different than the type of file systems that we've talked about in this term so far so among other things there's what's colloquially called the cap theorem okay this was from eric brewer in the early 2000s and the idea is that there are three properties okay consistency availability and partition tolerance and you can only have two of them at a time in any real system so what consistency means is that changes to a file or a database or whatever appear to everybody in the same serial order that's consistency availability says you can get a result at any time and partition tolerance says that the system will keep working even when the network gets split in half okay and the problem that you encounter when you have a distributed system like uh distributed network storage is you start worrying about partition tolerance uh you know what happens if the network is split and if you are going to be able to keep going while the network is split then you're going to lose one of consistency or availability okay so you can't have all three at the same time this is also otherwise known as brewer's theorem so you can pretty easily think about this for a moment so suppose that i want to always have availability so i can always use my file system and i want to be able to deal with partitions when the network is split you can see why consistency
might not work right because if i split the network in half and these clients over here are busy writing data and these clients over there are busy writing data then i'm not really getting consistency because the file is not consistent it's got two different views of it on different coasts okay so that's one example of being able to only have two things if i want to have consistency and partition tolerance for instance i want to be able to make sure i always see a consistent view but i can deal with partitions in the middle can anybody explain to me why i lose out on availability when i do that why would i lose out on availability yep the reason i lose out on availability is because to be consistent and deal with splits in the network then i can't write anymore and so it's no longer available to write because i can't allow there to be an inconsistent view very good all right so we're going to pursue this next monday on our last official class we're going to talk a lot about distributed storage solutions like nfs and afs we'll talk about key value stores and um probably in the final lecture on wednesday of next week which you won't be responsible for on the exam we'll talk about things like chord and can and some of the other distributed storage systems out there all right so in conclusion we talked a lot about tcp which is a reliable byte stream between two processes on different machines over the internet so you get basically a stream and it doesn't matter whether it's local or remote you get the same view of it and we talked about how to use acknowledgements with a window based acknowledgement protocol and congestion avoidance to make sure that this works well and represents good citizenship we talked about remote procedure calls which is how to call a procedure on a remote machine or in a remote domain and give us the same interface as procedures but remote okay um we started talking about the distributed file system and the cap theorem
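the split-network example from a moment ago can be sketched as a tiny simulation this is my own made-up illustration not code from the lecture two replicas of one value both stay available and keep accepting writes during a partition so their views diverge and consistency is lost

```c
#include <assert.h>

/* hypothetical sketch not from the lecture two replicas of one counter
   during a network partition each side stays available and accepts its
   own writes so the replicas diverge and consistency is lost */
typedef struct { int value; } replica_t;

/* a client write applied to whichever side of the partition it can reach */
static void replica_write(replica_t *r, int delta) {
    r->value += delta;
}

static int partition_demo(void) {
    replica_t west = { .value = 100 };  /* both replicas start consistent */
    replica_t east = { .value = 100 };

    /* network splits west coast clients write to west east coast to east */
    replica_write(&west, +50);
    replica_write(&east, -30);

    /* both sides stayed available but the views differ -> no consistency */
    return west.value != east.value;  /* 1 means the replicas diverged */
}
```

so with availability plus partition tolerance the two coasts end up holding different values which is exactly the consistency you give up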
okay and next time we're going to talk about uh virtual file system layer and cache consistency and how we can basically build a file system into the network all right i'm going to end there i hope everybody has a great thanksgiving we will see you a week from today back on monday and i hope everybody gets a little bit of a break and enjoys themselves
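as a quick addendum to the marshaling discussion from this lecture here's a minimal sketch of what a client stub does under the covers the names and wire layout are mine not from any real rpc package the stub marshals a procedure id and one argument into a request buffer and the server stub unmarshals them on the other side

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* hypothetical request layout real rpc systems define a canonical form
   such as xdr with fixed byte order this sketch skips byte swapping */
struct request {
    uint32_t proc_id;  /* which remote function to call */
    int32_t  arg;      /* single argument in canonical form */
};

/* client stub side marshal: put values into the packet */
static size_t marshal(unsigned char *buf, uint32_t proc_id, int32_t arg) {
    struct request r = { proc_id, arg };
    memcpy(buf, &r, sizeof r);  /* real stubs also convert to network byte order */
    return sizeof r;
}

/* server stub side unmarshal: take values back out of the packet */
static void unmarshal(const unsigned char *buf, uint32_t *proc_id, int32_t *arg) {
    struct request r;
    memcpy(&r, buf, sizeof r);
    *proc_id = r.proc_id;
    *arg = r.arg;
}
```

the server stub would then use proc_id to decide which local function gets called which is exactly the dispatch step described above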
CS_162_Operating_Systems_and_Systems_Programming_Berkeley | CS162_Lecture_65_Concurrency_and_Mutual_Exclusion_Supplemental.txt

hello everybody um welcome to a quick little supplemental for the lecture from last night um i didn't quite get through all of the topics i wanted to and so i thought i would record this just to help out a little bit maybe for your uh laboratory number one and putting together project one uh design docs so um what we were talking about yesterday was uh concurrency and mutual exclusion and we're sort of starting that whole process of how to get synchronization to work properly and i just wanted to finish up that discussion if you remember we sort of introduced this issue that while threads can give us the ability to overlap io and computation and therefore seem like a really nice efficient way to have multiple things happening at once we run into an interesting issue so i had introduced this banking uh example where for instance what we did on a deposit was we would get the uh account information we'd increment it uh because we were doing a deposit and then we'd store it back and uh the reason threads were really going to help us out here is because if one user was uh you know stuck waiting for disk io another user could be adding to their balance or whatever and so this was our motivation for going to threads except as you see here the threads encounter the fact that the accounts are shared state and so in this particular instance we show an example here where perhaps thread one gets the balance and then thread two gets to run and it grabs the balance increments and stores it back and then thread one gets to run again and as a result the whole operation of thread two is erased because thread one overwrites it if you remember i talked about the malicious scheduler viewpoint which is you need to view the fact that there is a malicious murphy's law scheduler running anytime you have
multiple threads working on shared data that will find a way to produce the sequence that corrupts your data and it will do so at the worst possible time so in this instance our malicious scheduler found a way to split up thread one and thread two okay at exactly the wrong times now what we clearly need to do is we need to put an atomic section in here which basically says that three instructions load add and store all become atomically bound together and can't be interleaved okay and so to do that we talked about locks and locks have come up earlier in the um term as well and a lock in general prevents somebody from doing something and uh in the context here you imagine locking before entering a critical section unlocking when you leave and waiting if something's locked and so the important idea that uh you ought to get from these lectures on synchronization is that pretty much all synchronization problems are solved by waiting so if you look back at our bad example here the fact that thread one started doing something and then thread two popped in and screwed everything up and then thread one got to go again if thread two were to just wait uh until thread one was done with its atomic section we would have resolved this particular bad behavior okay and so all uh synchronization problems can be solved by waiting and the trick is to wait only as much as you need to not too much and we'll talk a lot about examples where you wait too long and of course when we talked about locks yesterday i mentioned the fact that you need to allocate and initialize a lock so for instance on the left here you might declare a mylock structure and run a lock init on a pointer to it or on the right side you might declare a uh pthread_mutex_t and then uh initialize it in one way or another depending on which type of locks you're using but then once you've done that the locks provide a couple of atomic operations one is acquire which
you know for instance in c syntax would take a pointer to the lock you're acquiring and when you do that you wait until the lock is free and then you grab it and if you try to acquire the lock when somebody else has marked it as busy then you wait and wait in this context is going to mean your thread is put to sleep so you're not wasting cycles and we'll talk about that in the next lecture and then when you're done you release the lock which will then free up potentially somebody who might be waiting and so uh looking at our banking problem again what you see here is we identify uh the critical sections so a good critical section here is uh this atomic uh sequence of getting the account incrementing and storing back and we uh decorate around it the acquisition and release of the lock so we acquire the lock at the top and we release it at the bottom and by acquiring and releasing a lock what we've done is we've ensured that only one thread gets to run at any given time in the critical section and that's what we call enforcing mutual exclusion and so just to show you that a little graphically here we've got an animation here's the critical section with an acquire and a release and if you have multiple threads that are all trying to get into that critical section say here threads a b and c what happens is only one of them gets the lock and the other ones are forced to wait so for instance if thread a is the one that gets the lock what that really means is not only do they mark the lock as busy but they're allowed through the acquire operation thread b and c are uh waiting in acquire so their threads are sitting in the acquire function or system call or whatever it happens to be we'll talk about many options uh starting next time they will be waiting there so they won't emerge from the acquire yet okay so again looking up top here if multiple threads all call deposit at once only one of them will actually get through the acquire
and into the critical section the rest of them will be waiting in the acquire uh function call or system call and so what i show you here is as soon as a exits then b is allowed to go through and then as soon as b exits then c is allowed to go through and what ordering do things come through the acquire operation well unless it's a special type of lock where the semantics are explicitly specified you have to uh assume that there's a non-deterministic choice as to which of the threads that are all waiting on the lock is allowed through at any given time the important part however being that only one of them is allowed through okay and to circle back and finish up this example in order to make this really work it's the account that is the shared data here and so we need to make sure that all uh code that accesses the account is protected by the same lock okay so for instance there might be a uh withdrawal here or there might be an initialize uh or some other account operations we need to make sure that they all use a lock with an acquire and release around the critical section and in particular they need to use the same lock okay all right now um some definitions that we had last night as well so synchronization is basically using atomic operations where an atomic operation is a sequence of non-interruptible instructions to ensure cooperation between threads to make sure that we don't get undefined behavior okay and mutual exclusion is the technique that we talked about here where by putting locks around a critical section and making sure that we exclude all but one thread at a time from that critical section then we can make sure that we have an atomic operation there and that our synchronization works okay and so that critical section is typically the piece of code that's being protected by an acquire and release of a lock and we put locks around it to get mutual exclusion to give us our synchronization okay now
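to make the acquire critical-section release pattern on the banking example concrete here's a small runnable sketch using posix pthread mutexes the struct and function names are my own not from the lecture slides two threads each make 100000 deposits under the lock and no update is lost

```c
#include <assert.h>
#include <pthread.h>

/* hypothetical sketch of the banking example the lock protects the
   load add store sequence so concurrent deposits can't lose updates */
struct account {
    pthread_mutex_t lock;
    long balance;
};

static void deposit(struct account *acct, long amount) {
    pthread_mutex_lock(&acct->lock);        /* acquire: wait until the lock is free */
    acct->balance = acct->balance + amount; /* critical section: load add store */
    pthread_mutex_unlock(&acct->lock);      /* release: wake a waiter if any */
}

static void *depositor(void *arg) {
    struct account *acct = arg;
    for (int i = 0; i < 100000; i++)
        deposit(acct, 1);
    return NULL;
}

/* two threads deposit concurrently with the lock the result is exact */
static long run_two_depositors(void) {
    struct account acct = { PTHREAD_MUTEX_INITIALIZER, 0 };
    pthread_t a, b;
    pthread_create(&a, NULL, depositor, &acct);
    pthread_create(&b, NULL, depositor, &acct);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return acct.balance;
}
```

without the lock and unlock calls in deposit the malicious scheduler could interleave the two threads and the final balance would come up short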
here's another concurrent program example you've got two threads a and b they compete with each other one tries to increment a shared counter the other tries to decrement it and then we've got this kind of free-for-all between thread a and thread b shown here we have uh basically them sharing the same variable i so in this instance it's a global variable but thread a sets it to zero and thread b sets it to zero so they're both setting the same shared variable to zero then uh we're in a while loop so while i is less than 10 a sort of increments i and b says well while i is greater than minus 10 it decrements and whoever wins gets to say a wins or b wins okay and we're going to assume that memory loads and stores are atomic but incrementing and decrementing are not atomic and so from that standpoint there's no difference between i equal i plus one and i plus plus those compile to the same underlying instructions uh which is a multi-instruction sequence okay in most cases um and so what happens here is uh well either of them could win and in fact we've got kind of a funny scenario where it's not even guaranteed that anyone wins okay because um if you look at a hand simulation of this example we could look at the inner loop and here we have the example thread a and b thread a might load from wherever i is into register one which uh maybe has got a zero there thread b does the same thing we got a zero remember we initialized i to zero and then uh thread a basically adds one to it and meanwhile thread b subtracts one and how could we get this perfect interleaving well um this perfect interleaving could happen if we have two cores that are running or maybe we have uh hyper threading which we talked about last night as well and then of course thread a goes ahead and stores a one because it added one to zero and got one but thread b now stores a minus one and notice that because of this interleaving thread b ended up completely overwriting the
result of thread a so thread a went to all this trouble and then nothing happened okay because thread b overwrote it so this is clearly a failure of atomic sections you know you can just imagine this race where a gets off to an early start b says better go fast and tries really hard a goes ahead and writes one then b goes and writes minus one and a says huh i could have sworn i put a one there okay this is uh indicative of the types of problems that happen when you've got data races going on data races i'll show you in a second here is basically two threads attempting to access the same data so that's basically this memory location m of i uh where at least one of them is a write and here we have a situation where in fact two of them are writes okay so that's a data race and the notion of simultaneous is really defined even when you only have a single cpu and you can't have simultaneous execution like this shows here in this example above but the scheduler could switch out at any time so you effectively have all of the liabilities as if you had simultaneous execution so this may be concurrent but not parallel but it still behaves badly even in those instances those are race conditions okay so we could pull out our locks now and we could say well here i'm going to put acquire and release around the um around the increment or decrement and now did we do better okay well here now we no longer have an example where a thought it was incrementing but ended up doing nothing because b overwrote it because we've got locks so thread a gets to the acquire first and it's busy incrementing then thread b gets to the acquire it's going to have to wait until a is done then it'll release the lock and then b will get to go through the acquire and do its decrement so each increment and decrement operation is now atomic that's good okay um and in many cases this might be what you want technically there's no longer any race condition here because it's never possible for thread a
and thread b to be simultaneously accessing i when one of them is a write and why is that well because the simultaneous access can't happen because of the locking going on here all right but um the program is still broken potentially um because this is uncontrolled okay uh a and b are just incrementing decrementing incrementing decrementing there's really no control as to how many loops there are or who wins and so maybe technically you've gotten rid of the race condition in the middle although there is this looking at i in the while loop so i suppose maybe you could still call it a race condition but it's probably not really what you wanted this is really still a bad program the one instance where you might want something like this not with this loop but maybe the i equal i plus one with a lock around it is for instance when you might have 100 threads that are all working on some part of a problem and each one of them wants to get a unique number once it starts then they could call an atomic section like this which um does an i equal i plus one and returns the result back to the caller and now each of the threads some thread will get one some thread will get two some thread will get three some thread will get a hundred uh if you do that then this could be an okay use of something like this and it turns out actually there are atomic instructions that don't even require you to do the lock and unlock in those instances okay so um one more locking example uh here is this red black tree uh that we talked about in one of our early lectures and i also mentioned this last night and in this instance this tree is balanced in a very special way that the red black algorithm maintains okay as you're inserting and deleting elements and if we allow uncontrolled access simultaneity or race conditions to screw up the structure of the tree then it's not going to work properly anymore it's not going to have the level of balance it's supposed to have and so what we can do
is we can put a single lock at the root and um just make sure that before we touch the tree at all we acquire the lock here for instance to insert the number three and then release it uh maybe over on thread b we want to insert four we could acquire the lock insert and release maybe we want to get the number six we could acquire the lock search for six and release and what we've done is by putting um acquire and release of the same lock around all operations we make sure that at most one thread is ever manipulating the tree okay and so our critical sections are anytime the tree is accessed either read or written we put locking around it and therefore we make sure that the correctness of the tree algorithm is as good as a uniprocessor non-multi-threaded version okay and so this is a good use of locking even though the threads are busy adding and removing things in this instance it makes sense because the different threads might be grabbing data from the network somewhere adding it to the tree searching the tree because of some network query whatever deleting from the tree that makes some sense if we have all of these threads doing parallel operations and we make sure that the tree that's at the core of that algorithm is stable okay and um so this makes sense now you might say this is a little slow because if you have a lot of threads maybe most of them are waiting in the acquire of their operations and so then you can start to ask the question is there a way to make this faster well the answer might be yes that answer might be well you lock a certain path in the tree so you put a lock on every node and when you're searching or you're modifying then as you go down the tree you lock the nodes so that anybody else who tries to go there encounters those locks while others might be able to work in parallel but you've got to do that very carefully okay um for instance if you always
start by locking the top lock and then work your way down then of course you haven't gained anything so while there are ways to do uh locks on lots of nodes in a tree like structure in a way that keeps the um data structure consistent under a variety of different simultaneous operations you have to be very careful to do it so this idea of putting a single lock at the root hopefully makes perfect sense to everybody you know for a fact that that will always be consistent the moment you start trying to parallelize this and allow more than one thread modifying the tree at a time then you've got to be really careful and you can start talking about maybe somebody who's reading does some advisory um locking that's all about reads just so that if a writer were to come along they are not allowed to touch the part of the tree that the reader is in and maybe that's an okay way to get some parallelism but i just wanted to warn you that if you go down this path you've got to make sure you're careful what you're doing okay enough on that let's ask ourselves if locking is going to be the general answer okay and i'll tell you right now it's not locking is a way to do synchronization and it's a way to ensure critical sections but it's not always the easiest thing to do so let's look at this producer consumer idea where we have a buffer that's finite size and we may have many um producers of data that want to put data in the buffer and many consumers and the producers can produce things and the consumers can consume things running perfectly in parallel and all that we really want to do is we want to make sure that if the buffer is entirely full then producers are put to sleep uh because they can't put anything in a full buffer and uh similarly if the buffer is completely empty then a consumer gets put to sleep because they can't take anything off of an empty buffer so we want to make sure that there's still correctness here okay
and we certainly don't want the producer and consumer to have to work in lockstep so we want to do something that's a little more sophisticated than every producer and consumer first grabbing a lock that's associated with the whole buffer and then releasing the lock okay that's gonna put us in that same problem that we kind of saw with the tree data structure so what are we gonna do okay and there's many examples of producer consumer we talked about pipes which i'm loosely showing you here with my gcc compiler example uh where the c preprocessor and the first and second phases of the compiler then the assembler and then the loader all feed into each other and one produces results that are forwarded through a buffer to the next to the next to the next that's a great example of this bounded buffer the example i'm going to do here just because it's fun is a coke machine the producer can put in only a limited number of coke bottles because the machine only holds so many the consumers can't take uh coke bottles out of an empty machine and so what do we do okay and other examples are web servers and routers and you name it this bounded buffer is a good example okay so here's an example of a circular buffer data structure where we have a write pointer and a read pointer and we set this up so that the read pointer kind of points to the next thing to be read off the queue and if you keep reading you'll circularly wrap around and if the read pointer ever runs into the write pointer then it knows that there's no data there and similarly if the write pointer ever runs into the read pointer it knows that things uh are full okay and so uh you know the start on this is there's a buffer structure there's two integers a write index and a read index and then there's an array i'm roughly saying you know of some type star entries that's buffer size long and notice that this is not valid c code obviously you can't say arrow type arrow although you might in some other
language and so we might ask some questions how do we know if it's full on insert or empty on remove and what do you do if it is you need to put threads to sleep and put the producer to sleep or the consumer to sleep and uh what do we actually need for our atomic operations okay so this is a clear question uh that comes up based on what i said there earlier and so uh here's our first cut okay we'll have a mutex which is a lock on the buffer it's initially unlocked and the producer might do something like this they grab the lock they sort of spin in a loop saying well while the buffer is full don't do anything okay because remember our producer can't put data into a full buffer and then once it's no longer full we enqueue an item on the queue and then we release the buffer lock and then a consumer looks similarly where we acquire the lock on the buffer we wait and as long as it's empty nothing happens otherwise we know that it's not empty that means we can dequeue and then uh we're gonna release the buffer lock uh when we're done and return the item okay and so notice that what we've got here is uh when the producer can't put anything because things are full we're going to spin and when the consumer can't get anything because it's empty we'll spin and so that's the weight right so i remember i said all uh synchronization problems the solution has some form of waiting this looks like this helps us okay but not so well if you think about it because look at this the producer acquires the lock and then goes into a spinning weight loop then they have the lock acquired and they're in an infinite loop which means they're waiting for the buffer to um get emptied a little bit except that if a consumer comes along it's not going to be able to acquire the lock because the producer's got the lock so this consumer is going to go to sleep waiting to acquire the lock forever and this producer will be spinning forever and we've effectively got a deadlock here okay so or it's really 
technically a live lock um but it's a live lock that can't resolve and so this is uh not a good solution so we got to do something else okay so here might be a solution uh and if you notice what's different here is the producer acquires the lock and then says well if the buffer is full i'm going to quickly release and then reacquire the lock and then check again and release and acquire and so notice and the consumer's got a similar idea here and if you notice why is this better well this is better because let's suppose that the producer is trying to put something on a full queue they first acquire the lock they notice the queue is full and at that point they release the lock okay they reacquire it and then they check again and they keep doing that over and over again until the buffer is not full and then they continue and the reason this works not very gracefully and not very well is that the consumer let's suppose that things are full so the producer acquires the lock notices they're full the consumer comes along and yes it could acquire the lock or try remember in our last example they couldn't because the producer was holding the lock but if the consumer comes along and goes to sweet sleep in the acquire then the moment that the producer says oh buffer full release it releases the lock at which point the consumer comes out of the acquire and now it has a lock okay and it's going to notice probably that the buffer is not empty because we know it was full and then it can dq and go on and so that release is actually going to release and let the consumer go and this reacquire will potentially temporarily go to sleep until the consumer here finishes dequeuing and then releasing at which point will come out of the acquire we'll notice the buffer is no longer full while in queue and go on so surprisingly this works okay and this actually uh works in a variety of circumstances but it's not great because notice that we're we're burning a whole lot of cycles so if there 
are no consumers what happens with the producer that's encountering a full buffer is it's busy running release acquire release acquire as fast as it can and it's wasting cpu cycles to do nothing so this is a form of busy waiting okay and so this isn't really going to help us much now you almost you you might also ask well will this work on a single core uh and the answer is well if you think of the idea of trying to acquire a lock when um you know when somebody else has it as you've got to go to sleep then what happens there is we go into the scheduler we talked about last night and in that scheduler at that point we effectively relinquish the lock excuse me effectively relinquish the cpu and at that point we somebody else gets to run which could potentially be the consumer in which case if uh they're ready to run they'll dq and then when we get to run again we'll acquire the lock and enqueue so this is actually going to work on a single core and it's also going to work on a multiple core but this really is wasting a whole bunch of cpu time so this isn't great either um and uh so what else are we going to do so notice that if we actually go to sleep in an acquire we're not wasting cpu the problem with this solution is we're spinning where if there's only the producer a single producer and the buffer is full then we release and acquire and release and acquire we just keep going wasting cycles forever and uh those cycles potentially could be used by some other code that ultimately becomes a consumer which will resolve the producer we call that a busy weight talk about that next time on monday so that's this little busy weight symbol okay we're waiting we're waiting we're waiting we're spinning we're spinning we're wasting cycles okay so we need something else and this is really just indicative of the general problem that locks while they're generally powerful enough to do pretty much anything aren't quite the right high-level api to do what we want so we would like a 
way to do something like this that lets us do a better job of managing resources than a lock, that is, higher-level primitives than locks. We're going to talk about a couple of them as we move forward in the next couple of lectures, and we can ask ourselves what the right abstraction is for synchronizing threads that share memory. Clearly, we said a lock can be used to share memory safely under a wide variety of circumstances, but you have to admit that this particular spinning code is not all that intuitive, and certainly isn't a good use of resources. So maybe we want something else, something as high-level as possible, where I think of locks as lower-level. Good primitives and practices are going to be very important, because the easier the code is to read and understand, the more likely you are to have it correct by design, and it's really hard to find bugs in multi-threaded code that shares data. Different variants of Unix are pretty stable now, but it used to be very common for Unix systems to crash every week or so because of concurrency bugs, and that was just what people accepted. So synchronization is a way of coordinating multiple concurrent activities, and over the next several lectures we're going to talk about ways of synchronizing that are a bit more intuitive and more likely to be correct.

That leads us to semaphores, which is the topic I wanted to get to today in this special segment. If you remember, I introduced semaphores a bit a couple of lectures ago. A semaphore is a kind of generalized lock; the term comes from the signals you see on railways. It was the main primitive used in the original Unix, and it's also used in Pintos and several other operating systems. The definition is that a semaphore has a non-negative integer value and supports two operations. One is "down," or P, the standard thing to think about: an atomic operation that waits for the semaphore to become positive and then decrements it by one. Notice I said it has a non-negative integer value, so that could be zero or higher. What down (P) does is wait for the semaphore to become positive: if the semaphore is zero and I execute down, I wait, and that waiting is one where I go to sleep; it's not a spin wait or a busy wait. The moment the value becomes positive, it's decremented by one and the down (P) operation returns. "Up," or V, is sort of the opposite: an atomic operation that increments the semaphore by one, and if somebody is sleeping in P, it wakes them up. That wake-up will then try to decrement by one, and if it succeeds, one thread gets out. Think of V as a signal operation and P as a wait operation. P stands for "proberen," to test, and V for "verhogen," to increment, in Dutch, which is where Dijkstra got the names.

So semaphores are just like integers, except: one, there are no negative values, only whole numbers; two, the only operations allowed are P and V, so you can't read or write the value except initially (you set it to an initial value, and then your only interface is P and V); and three, the operations are atomic. So if you have two P operations on two different threads, there's no way for them to decrement the value below zero; whatever the implementation is (and we haven't gotten to implementation yet), it will ensure the semaphore can never go below zero, and, for instance, a thread going to sleep in P won't miss a wake-up from a V. It won't be the case that a thread is left sleeping in P while the semaphore itself is one or more. That interface is assured because P and V are atomic. Now, POSIX actually has a semaphore that gives you the ability to read the value after initialization, but technically that's not part of the proper interface; the proper interface of semaphores is only P and V after you've initialized.

From the railway analogy, here's an example of a semaphore initialized to two for resource control, and this is going to start looking a little different from just locking. There are two tracks, and a value of two basically says we're only going to allow two trains into the switching yard at once. The first train comes along the track and executes a P operation on the semaphore, taking its initial value of two down to one; the P succeeded, so the train keeps going. The next train that comes along executes P, and now the semaphore equals zero, but that second P also succeeded. It's only when the third train comes along and tries to execute P that it gets stopped: it executes P, and the P (or the down operation, as some interfaces call it) hasn't returned yet. What would make it return? When a train exits and executes V, the V operation increments the semaphore, and that increment wakes up somebody sleeping in P, at which point they decrement it back to zero and get to go. If we let a train go, the value quickly increments to one, then decrements, and we're back where we were. What's different here is this idea of multiple resources, like two here; this is basically giving us a way of enforcing that only two things are in the rail yard at once, whereas if you think about what a lock is, it's about mutual exclusion, allowing only one thing into a critical section; this is allowing two or more.

So there are at least two uses of semaphores. One is mutual exclusion, which is also sometimes called a binary semaphore, or a mutex, which is really used like a lock; that's why, if you look at how you make a lock in POSIX, they actually call it a mutex. A mutex, a lock, a mutual-exclusion device: essentially the same thing. If I set the initial value to one and then try to do a semaphore P on that semaphore, the first thread through decrements it to zero and gets busy doing the critical section; any others that come along encounter the fact that the semaphore is zero, can't get through, and therefore can't go forward. Another use of semaphores is a scheduling constraint. We saw this earlier with the trains: the constraint that at most two items could be in the rail yard. Here, for instance, if we set the semaphore's initial value to zero, we get the idea of allowing a thread to wait for a signal: if thread one waits for a signal from thread two, then thread two will schedule thread one when the event occurs. This is kind of like thread join: we set the semaphore to zero, and join basically does a semaphore P on it. Since the value starts at zero (that's the initialization), the thread doing the join goes to sleep, waiting for the semaphore to become positive; as soon as the other thread finishes, it increments the semaphore, taking it above zero, which wakes up the join, and we get exactly the same behavior as a thread join.

Okay, so revisiting the bounded buffer for a moment, what we see is that we have correctness constraints. The consumer has to wait for the producer to fill buffers, or in
the case of thinking about this as a Coke machine: you're a student, you go to the Coke machine, there are no Coke bottles in there, you've got to wait. Maybe it's really late, so you take a nap in front of the machine until somebody comes to fill it. The producer, the guy bringing the Coke bottles, has to wait for the consumer to empty the buffer: if the delivery guy shows up and the machine is full, then in the terms of the bounded buffer we're talking about here, he's forced to wait until somebody buys a bottle of Coke, and then he can put another one in. So we have two correctness constraints that are about resources: the consumer waits for the producer to fill buffers, and the producer waits for the consumer to empty buffers. And then there's one more constraint, a mutex constraint, to make sure we have correctness on the queue itself and don't get bad behavior. That's going to be just like a lock, and it's needed for the same reason we needed the lock at the root of the red-black tree in that earlier example: correctness. We want to make sure the queue doesn't get screwed up. The reason we need that mutual exclusion (I just said this) is that computers are stupid, and if you have multiple threads both trying to manipulate the reader and writer parts of the interface, you're going to get bad, inconsistent behavior. There might be other, more complicated variants: maybe the input puts things into a heap and the output takes the item with the smallest value out of the heap. There are many instances of this bounded buffer you can think of that are more sophisticated than just FIFO.

All right, so the general rule of thumb: use a separate semaphore for each constraint. So we have a semaphore for the full-buffer constraint, a semaphore for the empty-buffer constraint, and one for the mutex; that's three semaphores. We start out with no full slots, because the machine is empty; we start out with 100 empty slots, because the machine is empty; and we start the mutex at one, because we're using it as a lock for mutual exclusion. Then our code is pretty simple and straightforward. The producer comes along and first executes a semaphore P on empty slots. What this says is: if the number of empty slots is zero, because the machine is full, we sleep there at that semaphore P; the producer can't add any bottles to the Coke machine if there are no empty slots. Assuming there were empty slots, the semaphore P decrements the number of empty slots. Why? Because we're about to add another Coke bottle, so there's one less empty slot. Then notice that we grab the mutex with a semaphore P and release it with a semaphore V after we're done, and that's all to protect the queuing operation: we're going to enqueue a Coke into the machine, or enqueue an item into the buffer. Why do we have a semaphore P followed by a semaphore V around the enqueue? Because we can't afford to have multiple threads screwing up the enqueue operation; think of the mutex as a lock. Finally, when the producer is done, it increments the number of full slots with a semaphore V.

The consumer is kind of the mirror image. The consumer, say a student grabbing a bottle of Coke, first does a semaphore P on full slots: if the number of full slots is zero, that P goes to sleep; otherwise, if there are more than zero full slots, meaning more than zero bottles of Coke, the P decrements the number of full slots and returns. We have our mutex around the dequeue operation, so we grab the lock by doing a semaphore P and release it by doing a semaphore V, and in between we correctly dequeue. Then, finally, when we're done, we increment the number of empty slots with a semaphore V to tell the producer we need more. Think of these as critical sections, or maybe just the enqueue and dequeue, being protected by mutexes; that's one use of semaphores. Then, when this producer puts a bottle of Coke in, not only does it increment the number of bottles by incrementing full slots, but if it turns out there was a consumer waiting for a Coke bottle, that semaphore V on full slots will wake up a thread that was sleeping in a semaphore P. And, by the way, I'm sure some of you are thinking: what if there are five students sleeping on that semaphore P? What happens is that the semaphore V taking the value from zero to one might wake them all up, but the first thing each of them tries to do when awake is decrement the semaphore. One of them, just because of the scheduler, gets the chance to decrement it from one to zero, exits the semaphore P, and gets to go on; the rest encounter that the semaphore is already back to zero and have to go immediately back to sleep. So that full-slots increment only lets one of the sleepers through, if in fact full slots went from zero to one when we did the semaphore V. The flip side is that the semaphore V on empty slots will wake up the producer, if it turns out there's a producer sleeping on the fact that there aren't any empty slots for the Coke bottles. So this is here to give you an idea that semaphores are a lot more sophisticated in what they can do: they do mutex operations, like locking, and they do resource operations, where you track the number of resources and take action based on that.

A little discussion about the symmetry of this solution. Why do we do a semaphore P on empty slots and a semaphore V on full slots for the producer, but the consumer does the opposite? That's because the producer waits when there are no empty buffers and signals that it has filled a buffer, whereas the consumer waits when there are no full buffers and signals when there's a new empty buffer. So here we decrease the number of empty slots and increase the number of occupied slots; there we decrease the number of occupied slots and increase the number of empty slots. Notice, by the way, that we have two semaphores for either end of the spectrum: whether we can add items at the front, and whether we can remove them from the back. Those two semaphores are on opposite ends of the buffer, so we need two of them; we can't get by with a single semaphore that tells us how many items are in there, because then we wouldn't be able to sleep on one side or the other. We need two, one for each side of the buffer.

The other thing to notice: is the order of the P's important? The producer did semaphore P on empty slots and then semaphore P on the mutex, then the enqueue, and so on. Does it matter if I swap these? The answer is yes; this can actually cause deadlock. Why? Look: the producer comes in, executes semaphore P on the mutex, so it grabs the lock, and then it says, oh, there are no empty slots, and goes to sleep. We've now got a situation where the producer is sleeping while holding the lock, which means that if the consumer comes along and tries to take away a bottle of Coke, it will execute semaphore P on full slots, then try to grab the mutex, but it can't, because the mutex was grabbed by the producer just before it went to sleep. So the consumer is permanently stuck. That's a bad deadlock scenario, and you could construct a cycle; we'll talk more about deadlock later in the term. Is the order of the V's important? No, and the reason is that neither of them blocks in any way: what they do is increment a value and possibly wake somebody up, so you can do those in any order. What if we have two producers and two consumers? If you look back at our solution, you'll find it works for any number of simultaneous producers and consumers; threads just go to sleep if there's no space. So this particular solution works perfectly well for many producers and many consumers, including especially the one-producer, one-consumer case we might have started with; we don't need to change anything.

So where are we going with synchronization? On Monday and over the rest of the term, we're going to build various high-level synchronization primitives using atomic operations. You're going to see a bunch of hardware to help us: we'll start with load and store being atomic, then disabling interrupts as a way of getting locking, and then we'll talk about using test-and-set and compare-and-swap. Then we're going to start putting in higher-level primitives; what I mean by that is we already know what locks and semaphores are, but we're going to start talking about how you build them. We'll also talk about monitors, and send and receive, and so on, more sophistication, and then we'll talk about shared programs. All right, so that's all I wanted in this supplement: just to talk to you a little bit more about locking and semaphores. We'll repeat some of this material on Monday, but I wanted to give you a little extra heads-up in case you were interested in learning more about semaphores before your design doc was due. All right, have a great rest of your day, and we'll see you on Monday. Thank you.
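To make the bounded-buffer recipe from this segment concrete, here is a runnable sketch using Python's threading primitives (the lecture's code is C-like pseudocode; the class name `BoundedBuffer` and the demo sizes here are my own). `Semaphore.acquire` plays the role of P and `release` the role of V, with `empty_slots` initialized to the capacity, `full_slots` to zero, and a mutex around the queue operations, exactly as described above.

```python
import threading
from collections import deque

class BoundedBuffer:
    """Bounded buffer guarded by two counting semaphores plus a mutex,
    mirroring the empty_slots / full_slots / mutex pattern."""
    def __init__(self, capacity):
        self.buf = deque()
        self.empty_slots = threading.Semaphore(capacity)  # starts at capacity
        self.full_slots = threading.Semaphore(0)          # starts at 0
        self.mutex = threading.Lock()                     # protects self.buf

    def produce(self, item):
        self.empty_slots.acquire()   # P(empty_slots): sleep if buffer is full
        with self.mutex:             # P(mutex) ... V(mutex) around the enqueue
            self.buf.append(item)
        self.full_slots.release()    # V(full_slots): wake a sleeping consumer

    def consume(self):
        self.full_slots.acquire()    # P(full_slots): sleep if buffer is empty
        with self.mutex:             # P(mutex) ... V(mutex) around the dequeue
            item = self.buf.popleft()
        self.empty_slots.release()   # V(empty_slots): wake a sleeping producer
        return item

# Demo: one producer, one consumer, deliberately tiny capacity so the
# producer is forced to sleep and wake as the consumer drains the buffer.
bb = BoundedBuffer(capacity=2)
results = []

def producer():
    for i in range(10):
        bb.produce(i)

def consumer():
    for _ in range(10):
        results.append(bb.consume())

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # all ten items arrive, in FIFO order
```

Note that the ordering rule from the lecture holds here too: swapping the two acquires in `produce` (taking the mutex before `empty_slots`) would let the producer sleep while holding the lock and deadlock the consumer.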
CS162 Lecture 19: Filesystems 1, Performance (Continued), Queueing Theory, Filesystem Design

Welcome back, everybody, to CS162. We are on lecture 19, talking about file systems, and, hard to believe, we're on the final few lectures of the class; I think we're ending potentially on lecture 26, so we're getting close. If you remember, last time we were talking about devices, and among other things we talked about spinning storage and gave you some amazing stats about modern disk drives; I'll show you a couple of those in a moment. Basically, the way to think about a disk drive is that it's a series of platters that are double-sided, so there's storage on both sides, and there's a single head assembly, typically with an actual read/write arm for each side of each platter, and that assembly moves in and out as a group. Given the current position of the head, if you let the platters spin, which is what they do, it traces out a path; on a single surface we call that a track, and if we take all of the tracks traced out simultaneously by the heads, we end up with a cylinder. We talked about that, and a simple model for measuring how long it takes to get something off the disk includes at least these three items: seek time, rotational latency, and transfer time. The seek time is basically the time to move the head in or out, something on the order of four milliseconds these days. The rotational latency is the time for the sector that holds your data to rotate under the head. And finally, the transfer time is the time to actually pull a block of data off the disk. Now, there's a good question here about whether there's only ever one head. Just to be clear, the head is the thing at the surface, there's one head assembly, and usually there's only one of them; the reason for that, even though it seems like it would make sense to be able to independently read the different platters, is that disks are a commodity item and that would be way too expensive; the head assembly is one of the most expensive parts. So a complete model of how long it takes to read something from the disk, or write something to it, is that a request spends some time in a queue (we'll say a lot more about this today), then goes through the controller, and once it's through the controller it gets fed out to the actual physical disk, at which point we have the seek plus rotational plus transfer time. And remember, by the way, for the rotational latency we probabilistically say it's half a rotation, because on average it takes half a rotation to get the data underneath the head. Any other questions here? Okay. We showed you a picture or two of the inside of a disk last time as well, so if you missed that lecture you can go back and take a look.

Here were some typical numbers. Commodity Seagate 3.5-inch disks are now up to 18 terabytes, nine platters, more than a terabit per square inch on each surface, which is pretty amazing. We have perpendicular recording domains, so the magnetization that represents a one or a zero actually goes into the surface. Typically there's helium inside to help reduce the friction of the disk spinning around. The seek time is typically in the four-to-six-millisecond range, although a good operating system with good locality will get that down to a third of that time on average; the time that's specced out is the average time to go from any track to any other track. The rotational rate for laptop or desktop disks is in the 3600 to 7200 RPM range, which is somewhere between about sixteen milliseconds per rotation and about eight milliseconds per rotation for the faster ones; server disks can get to 15,000 RPM, so the latency is less. Controller time depends on the controller hardware. Transfer time is typically 50 to 250 megabytes per second (notice the capital B), and it depends on a lot of things, like what size you're transferring: sectors, the minimum chunk of data that can go on and off the disk, can be 512 bytes or up to four kilobytes on modern disks. It also depends on the rotational speed, which, as we just said, can vary from 3600 to 15,000 RPM, the density of bits per track, the diameter, and where you are on the disk: if you're on the outside, the surface is going by the heads faster than on the inside, so you can read the bits quicker on the outside. Okay, pretty amazing.

The other thing we started to talk about was the overall performance of an I/O path, and that performance really goes from the user through the queue, through the controller, to the I/O device. There are many metrics you might worry about, like response time, which is the time from when you submit a request to when you get the response back, and throughput, which is how many of these requests per unit time you can get through the system. Things that contribute to latency include the software paths, which are green here and can be loosely modeled by queues; the queues throughout the operating system are hard to characterize in general, so we're going to have to come up with a sort of probabilistic way of thinking about them. The controller and the device itself are a little more easily characterized; that behavior depends on the actual device. But the queuing adds some really interesting behavior here: there's this non-linear curve that starts out with a fairly low change in response time with respect to throughput, and then, as you get closer to the 100 percent mark, which is really the point at which your utilization is the maximum the disk can handle, the response time goes through the roof. We'll see a bit about where that comes from in this lecture. Okay, so now, to pick up where
we left off last time unless anybody had some other uh device related questions we talked a lot about ssds as well as spinning storage last time so um so let's start talking a little bit about performance of a device in general and we're going to call this a server so for instance here this yellow i o device would be a server or the combination a controller an i o device would be a server in this particular view of the world and so if we assume that we have uh some amount of time call it l that represents a complete uh service of something then we could have several of these one after another and assuming that the device takes time l to service a request and we put them right after each other so there's really no spacing between submitting the next request after the first one's done we can think of this as a deterministic server where the deterministic part is that it's always of time l and um the maximum number of uh service requests per unit time is just one over l okay because that's kind of the the best we could do if we put them end to end as tightly as possible and just to give you some numbers for instance if l is 10 milliseconds then the bandwidth of number of l's we can handle is about a hundred operations per second that's just one over ten milliseconds um on the other hand if l is two years then the bandwidth might be 2.5 ops per year et cetera okay now this applies this idea applies to a processor a disk drive a person a ta what have you it applies to getting burgers at mcdonald's you know each one of these is the amount of time it takes to get a burger and you know you can compute the maximum number of burgers that can be pulled out of mcdonald's for instance okay we'll get we'll get back to mcdonald's in a moment um so the we could take that l which is a total operation and we could divide it into a series of parts like say three equal parts and then um we could imagine that those three equal parts are actually handled by three different stages of 
some device or some pipeline what have you so this should sound a little bit like 61c and so in that instance here is our l which is spread over these three things but since we're pipelining now notice what happens we have uh the blue part the gray part and the green part and so um after you finish the blue part of the first request and it's on to the gray part of the first request then we can get the blue part of the second request and so on okay and that's going to overall allow us to do more things per unit of time so it's going to up our throughput okay and so for instance uh if you have a pipeline server like this with k stages and the task the total task length is l then we actually end up with time l over k per stage and the rate is k over l so again we had at l equal 10 milliseconds but now if we can divide it into say four pieces um then the bandwidth might be 400 ops per second or if l is two years and k is two then our bandwidth would be one op per year okay and so this is just noticing the fact that when we pipeline we can get more items per unit time shoved down that pipeline and of course all of the things that uh we talked about in 61c in that if these are not all equally if all of these pieces aren't equally the same size then you're going to get bottlenecked by the uh the small one okay and so that's going to be a problem um and so let's or actually excuse me bottleneck by the large one the one that has takes the most time okay now example system pipelines are everywhere so in 61c you basically talked about the processor pipeline here you could imagine that for instance you have a bunch of user processes they make a syscall they put things into the file system cues them up that's a pipeline doing file operations that then leads to disk operations which then lead to disk motion okay or in communication typically you've got a whole bunch of cues throughout the network and um those cues all work for each other and you have a lot of routers and the 
routers are all working in parallel and so ideally if you're communicating say between berkeley and beijing you have a nice clean path with a lot of packets in the pipeline from point a to point b and they're all moving their way along okay we'll talk about that level of pipelining when we talk more about networking in a week or so so anything with queues between operational processes behaves roughly pipeline like and so that analysis we were talking about applies now the important difference here is that initiations are decoupled from processing so that means that the reason i put a queue here in the first place is so that the thing producing the requests is decoupled from the thing servicing the requests and this is extremely important in general because request production is often very bursty okay and this is certainly true with file system calls it's certainly true with the network it's certainly true with a number of other things and so really we're going to want to be putting these queues in here to absorb those bursts and that synchronous and deterministic model that i roughly gave you here is not reality okay the reality is that we're going to have burstiness and so a lot of things are going to arrive quickly not at a regular rate okay so another thing we can do which we haven't talked about is we can increase our parallelism not by pipelining but rather by putting a bunch of servers in so that has a similar effect so in the case of these requests taking time l and not being able to be split up if we put say k different servers here then we can get k times the number of things operating simultaneously and so notice we get exactly the same numbers here latency is 10 milliseconds k is four we have four different servers then we could get 400 ops per second etc okay so there is the option to up your bandwidth by adding more servers or up your bandwidth by pipelining those
two things are kind of duals of each other and it depends on circumstances as to which one is good now so parallelism clearly comes into play for instance here when we have lots of individual disk drives it'd be great if certain things can be done in parallel and actually a couple lectures from now we're going to talk about things like putting a log in to give us better durability when things crash and it'd be great if we could have a separate disk drive to handle the log independent of the file system that'll give us higher performance clearly there's a huge amount of parallelism in the network and in the cloud and so when a bunch of people submit queries they go throughout the network they go to different parts of the cloud and therefore there's a huge amount of parallelism as well and that leads to all sorts of interesting behavior and we'll talk about network systems in some detail in the last few lectures so let's put together a little bit of a simple performance model so here we have a hose okay and we have the latency l which is the time for an operation so how long does it take to flow all the way through the system that's l so the latency is from the point where a little particle of water comes in at the top until it goes all the way through and comes out the bottom that's l bandwidth is sort of how many ops per second come into that hose or out of this pipe and that would be operations per second for instance or gallons per minute etc and if b is two gallons per second and l is three seconds then how much water is in this system in the actual hose can anybody figure that out yep six gallons right why because two times three is six and over the time that you've got those three seconds you keep dumping water in and so over that three seconds you get two times three seconds worth of water in the hose okay and so that's a pretty simple analogy hopefully everybody's
got that and that turns out to be something called little's law which is going to be helpful for us okay you know we can also so here we're talking about water which is divisible into as many little pieces as you like we could also talk about chunks of work so here's a case where each one of these little circles represents some fixed amount of work and so l is the time for us to get through the whole system now and if the bandwidth is two operations per second coming into this system and l is three seconds once again we'll have six operations one two three four five six in the system at any given time okay same idea but now we're looking at things that are quantized rather than a continuous flow like water okay so none of this is rocket science so far okay so this is not intended to be complicated but it's just intended to give you a way to think about some of these flow ways of looking at things okay now little's law is a way to define that okay and so little's law talks about a system which is this cloud arrivals come in at a certain rate and now instead of bandwidth which is sort of maybe a more normal thing for you all to think about we're gonna talk about lambda which is a rate of things arriving okay and so just think of this as a different symbol for b there's a length of time you're in the system and there's the number of things that are in the system at any time so things come in they're in the system they depart okay and in any stable system stable meaning that n doesn't grow without bound and it doesn't shrink down to zero on average the arrival rate and the departure rate are equal to each other okay so lambda is arrivals per unit time departures are departures per unit time on average the same number of things come in as go out so that this is a stable system and when we talk about this probabilistically what we're saying is on average n is stable it's neither growing
nor shrinking and so we're not limiting ourselves to deterministic systems where n is always exactly the same amount but on average it's stable okay and so little's law basically says that the number of things in a system is equal to the bandwidth times the latency or n is equal to lambda times l okay and this is universally applicable no matter what the probability distributions of lambda are you can use this and no matter what the distributions of l are so maybe not everything takes l time to go through the system then you can multiply it out and figure out how many jobs there are and sometimes i go through a full proof of this probabilistically i decided not to do that tonight but if you look at my slides from last term you can see that proof now one way to think about this is the hose analogy that i just showed you right the other is i like to think of this as the mcdonald's law okay and so imagine that what happens is a huge bus of people shows up at a mcdonald's they all get out and they form a line okay and so the bus causes a certain rate of people to come in that's lambda and there's a certain line that goes in the door and to the front counter okay and if you come to the door and you look and you see so many people are in front of you and you wait in line you wait in line you wait in line and on average the same number of people are coming after you if you looked from the door in and then you got to the counter and you turned around and you looked back there ought to be the same number of people there because it's a stable system and so the way to think of that is you take the speed at which they're coming through the door times how long you waited and that tells you how many people ought to be in the line all right so that's the mcdonald's big mac equation here little's law all right questions okay so the
thing about this law is you can apply this to any number of things you can draw a box around it call something a system it could be the queues it could be the processing stages it could be whatever you choose to draw your box around or your cloud around the average arrival rate times the average latency gives you the average number of jobs in the system l is the time it takes from when an arrival comes to the system to when it departs okay so again in the mcdonald's analogy you come to the door you look and from the point at the door until you get to the counter that's l all right and if you turn around and look back n is the number of people behind you and it's the same hopefully as the number of people that were in front of you when you got to the door okay good now notice l has something to do with how happy we are or how annoyed we are right if l is really long and it took us a really long time to get our hamburger we might be annoyed if l is short we might be happy and so l is that service time that we're interested in how long did it actually take for us from the point at which we submitted our request to when we got our hamburger or got our disk request satisfied that's l okay and we're kind of interested in keeping l as short as possible obviously all right any other questions ah why should we expect the system to be stable that's a good question the reason we expect the system to be stable is because if it's not stable the math is much messier but in reality there is a queuing theory which we're going to talk about which has to do with stable systems and in a stable system if you can come up with lambda and a departure and a service rate which we'll talk about you can then compute assuming that things are arriving at a rate lambda you can compute something about l okay if you're talking about what happens when the system first turns on and starts up or maybe the buses stop
arriving at five at night and the system drains those transient analyses are much more complicated and that's a different queueing theory class okay so that's complicated so this is related to ee120 system stability okay bounded input leads to bounded output but obviously the other thing that's at issue here is the type of queuing we're going to talk about we're not going to put a bound on the queues to start with because the math is a lot simpler okay so if you want to have some really interesting discussion about queuing theory there are several classes on the ee side that can do it much more deeply what i want to do is give you enough to do back of the envelope calculations all right so all right now let's talk briefly for administrivia midterm two we're still grading it seems like people thought it was long but maybe easier than midterm one i hope so we mostly had people complying with the screen sharing if you didn't we'll probably be getting back to you because that was definitely a requirement but we're hoping to have the grading done by the end of the week maybe sooner i know that they're well on the way to being through the grading so that'll be good the other thing is i didn't put this on the administrivia but there is a survey out for midterm surveys so please give us your thoughts on how the course is going we're roughly two-thirds of the way through so let us know and we'll see what we can do to help make the end of the class as easy and pleasant as the beginning of the class all right the other thing of course that's really important is tomorrow vote if you have the chance okay it's one of the most important things you can do if you're allowed don't miss the opportunity i know it sounds silly but people often say that if you don't vote you don't get a chance to complain about how bad things are i
would say that's true and my comment here has nothing to do with what you vote for or who you vote for that's totally up to you but it's important that if you have the option you exercise your chance to vote so tomorrow is it and then we get to see i'm not sure what's gonna happen tomorrow i'm a little worried about it hopefully things will go smoothly we'll find out and yes take care of your mental health as the results come in all right share the results with somebody else i know that people are talking about actually having vote watching parties this time so they're not by themselves when the results come in i know that's going to be in my household okay i don't really have any other administrivia for folks tonight unless there were any questions our last midterm is coming up in the beginning of december so we have a tiny bit of breathing room and project two is almost done okay all righty yeah i got the correction all right moving forward so let's talk about a simple performance model so again we have request rate lambda coming in now we're going with the queuing theory terminology we have a queuing delay which is how long things are in the queue and then the operation time t which is the time to get something satisfied and then we can consider the queuing delay plus the operation time as l that's one of our options there are many other ways to draw l and really what we've done here is we put the cloud around both the queue and the server in this case and so the spinning wheel could be an example of the disk for instance okay and the maximum service rate which is how many items we can get through here per unit time is basically a property of the system as a whole set by the bottleneck and so one of the things that we may need to look at to figure out what's the maximum rate we can serve things is what is the bottleneck okay and then once we know what the maximum rate
that we could come up with which by the way if you have a bottleneck that slows things down the mu max is going to be lower than it would be otherwise right so bottlenecks tend to lower your maximum rate we could talk about a utilization rho which is lambda over mu max so if you think about this this is really just saying if i have a maximum service rate and i have lambda coming in rho is a number that varies from zero to one which says sort of what total fraction of my maximum service am i trying to handle right now okay so if lambda's bigger than mu then i've got a problem okay so this utilization here is a number that has to be less than one so this is the correct ordering for the question in the chat okay now if you think about it why is that so lambda might be something like one hamburger per second mu might be a maximum of two hamburgers per second then the utilization is half of the hamburger production possibilities there all right good now what happens if rho is bigger than one yeah requests start piling up right so in fact rho bigger than one in a steady state environment is really an unbounded and undefined situation okay so in this analysis that we're talking about here the utilization is never allowed to be greater than one in fact the queuing theory equations that we're going to look at in a little bit have this behavior that they blow up when rho gets to one so as rho gets closer and closer to one the queue is going to get bigger and bigger the latency is going to get bigger and bigger okay everybody with me on that good now how does service rate vary with the request rate so if you look here mu max is basically the maximum number of items per unit time that i can handle but if i ask for less i'm not going to handle as many right so let's assume for a moment that again mu max is two hamburgers per second and i only ask
for one hamburger per second i'll look up on this graph and what i'll see is oh i'm only asking for one hamburger a second so the actual service rate that i get is going to be one hamburger per second okay because i'm not making use of all my capacity of course as i get up to two hamburgers per second that's the maximum that i can get out of the system what happens if i ask for three hamburgers per second well that's the point at which things are starting to build up and i'm certainly not going to get any more than two hamburgers a second okay so this break point here represents a very crude model of what happens when you ask for more than you can get and in reality if you were to actually look at what the service rate is it's going to be some smooth function of this to the point that we're probably never going to quite get the full maximum because of various overheads in the system and we could try requesting much more than the mu max but we're going to just build up our queues and we're not going to get any more out of the system okay everybody with me now so a couple of related questions might be so here we have our queuing delay and our service rate for instance what determines mu max and what about internal queues so when i said queuing delay here d i sort of implied it was one queue but there might be lots of queues in the system okay and so one of the things we need to do to figure out mu max is a bottleneck analysis and so if we take a look at a pipeline situation that we were talking about earlier remember each request requires a blue a gray and a green what that could look like in our overall system is there's a blue server a gray server and a green server they each have queues and they feed into each other okay so this is our pipeline and it's possible if we look at this if they're all of equal time so these are all equal weight then we could come up with a service rate that represents one over what one of these
little chunks are which is let's say l over three or something now unfortunately it may be that these stages aren't all equally balanced and so somebody has the slower mu max okay and they're going to end up limiting the rate so if for instance the third one which is green has the slowest mu max then what's going to happen is the queue behind it and everything else behind it are going to build up and so you could view this really as a full system with one queue representing everything behind it and a service rate of mu max number three and that's the system we're going to analyze okay and so that's the bottleneck analysis where you figure out what the bottleneck is now if the gray one were the bottleneck what's going to happen is things are going to come out of it slower than they can be handled and so this queue after it isn't going to build up but the queues behind it will okay and so in the bottleneck analysis you have to figure out what the bottleneck is and use that to figure out what mu max is all right and so really once we found the bottleneck we can think of this in this other simpler way okay so each stage has its own queue and maximum service rate once we've decided the green one is the slow one then the bottleneck stage basically dictates the maximum service rate and we'll look at this as a single queue with a server that has mu max number three all right questions now so for instance let's look at something that we talked about earlier in the term here we have a bunch of threads suppose there are p of them and they're all trying to grab a lock and that lock has some service time which maybe requires going into the kernel and doing something and coming back out and so what happens is the locking ends up serializing us on the locking mechanism okay so there's a question here let me back up for a second so i didn't say in this example that these are necessarily greater than lambda all i said is that mu max 3 is slower than mu max 2
and mu max 1. hopefully that was clear so we're basically coming up with the service side of this situation not the request side the request side is still lambda okay now if it turns out that lambda is greater than mu max three then we're in trouble okay so maybe that's why you were thinking that all right so back to this example so this is kind of an amdahl's law thing right so we got all this parallelism but the serial part is causing us trouble the other way to look at this is basically that we have x seconds in the critical section and so we have p threads times x seconds the rate is 1 over x ops per second it doesn't matter how many cores we've got so this could be a 52 core multi-core processor doesn't matter because all of these threads are brought to a halt while they're trying to grab this lock and so that's why it's an amdahl's law kind of thing but my rate is one over x ops per second okay so this is certainly an example we can think about here mu max is one over x in this case okay and the threads get queued up there and if we have threads coming in at a rate faster than one over x then we know that the queue is going to build up without bound and we're never going to make it okay so that analysis is one that's hopefully familiar from earlier in the term but we're gonna move this on we're gonna talk about devices as well so the other question we've been looking at here is so mu max is the service rate of the bottleneck stage and so we can think of it as i said that we really only have a single mu max server and a queue and that basically is a good model for a bunch of queues by modeling only the bottleneck stage okay so the tank here represents the queue of the bottleneck stage including queues of all the previous stages in case of back pressure basically what happens is when queues build up they sort of back up to the previous and the previous and the
previous and if you were to take all of those queues behind the bottleneck queue that's kind of what this tank is representing okay that's the big queue now it's useful to apply this model to all sorts of things we can apply it to the bottleneck stage we can apply it to the entire system up to and including the bottleneck stage or the entire system there's many different ways of drawing boxes and saying well what's the queue in this scenario what's the bottleneck stage okay so why do the queues behind the bottleneck stage back up well it depends okay let me restate the question here why do the queues behind the bottleneck stage back up the answer is they do that only if the queues are finite in size and so behind the bottleneck stage when that queue fills up it's going to prevent anything further from coming out of any of the previous servers which are then going to back up and so on okay so that would be true if each queue had a maximum capacity okay which in reality they usually do and so let's talk about latency for a second so the total latency is queuing time plus service time so this is again the mcdonald's analogy right here's the front door okay you go through the queue you get to the checkout counter you get your hamburger however long that takes to process and you exit that's the total latency okay and the service time depends on all sorts of underlying operations so if this is a cpu stage it could depend on how much computation is involved if it's an io stage it could depend on the characteristics of the hardware like if it's a disk it could depend on the seek time plus rotational latency plus the bandwidth coming off the disk okay so there are many different types of servers we could worry about here they're all roughly equivalent to this model and so what about this queuing time so we
still haven't figured out how long things are in the queue now if we were to ignore the previous discussion about queues backing up and instead allow this queue to be arbitrarily large then it's kind of an interesting question how big is the queue on average how many items are in the queue and that's something where we need to pull in some queuing theory now the queuing theory i'm going to give you in this class is going to be something that you can just apply i'm not going to really derive it although there are some references that i'm going to give you at the end which show the derivations and they're pretty straightforward because this is a simple queuing theory so let's take a look at our systems performance model we have now so we have lambda which is items per unit time coming in we have queueing delay which is the time you sit in the queue we have operation time which is the time to actually do the operation or service time and then we have the service rate mu and mu max is going to be the one that we're really talking about because that's the bottleneck okay and again utilization is rho equals lambda over mu max and we've already said that rho better not get to be bigger than one or we have some serious problems and in fact in the model you'll see in a bit if rho equals one we also have sort of infinite latency so that's really bad okay so when will the queue start to fill well the queue is going to start to fill when we're busy servicing something and something else comes in right so some questions about queuing we could say well what happens when the request rate exceeds the maximum service rate we already did that the queue is going to fill up short bursts can be absorbed by the queue if on average lambda is less than mu okay and so we don't actually require that lambda is always smaller than mu max what we say is on average lambda is less than on average mu max okay so mu max actually we can
start talking about a probabilistic service time in fact we will in a bit and a probabilistic entry rate and those two things entry rate and service rate can be probabilistic averages and as long as the average lambda is less than the average mu then we're good okay and it's only if we have prolonged lambda greater than mu that we have problems okay so let's talk about a simple deterministic world here so a deterministic world which unfortunately we don't live in these days is as follows we have a queue where arrivals come in and we have a total of t sub q time in the queue and then we have the service time t sub s and there's some numbers on the left you can see here so let's suppose in the deterministic world somebody comes in every t sub a without fail and with no probabilistic variation so now we can say that lambda which is the rate that people are coming in is one over t sub a and the service time is t sub s so mu is well it's k over t sub s if there's k servers there okay and then finally the total latency l is equal to t sub q plus t sub s so if i want to say what's my total time to get my hamburger it's the time in the queue plus the time to be served and that's how long i'm in the mcdonald's okay now if we take a look here what do we got so if we have an item come in every t sub a okay so this is what mcdonald's looks like at maybe 2:30 in the afternoon when nobody's coming in so a new person comes in every t sub a you spend a very short time in the queue in fact it's probably just the time to walk from the door to the counter and then it takes some service time to get your hamburger and notice that the important thing here is this service time t sub s which is one over my maximum service rate is shorter than t sub a so we're making sure that the time
it takes to get the hamburger you're completely done by the time the next one's ready to go okay otherwise you start building up the queue okay and since we're pipelining the time sitting in the queue versus the service time that's okay as long as t sub s over t sub a is less than one in this instance okay and this is totally deterministic there's no probabilities here at all and so in a deterministic world we have rho which is the utilization basically goes from zero to one okay which is lambda over mu which is t sub s over t sub a okay looking back here notice t sub s over t sub a is going to be our utilization okay and if we look here if our utilization is from zero to one our delivered throughput normalized against the maximum goes from zero to one so what do i mean by that so our maximum throughput here is one item every t sub s okay and so if we shove a new item in every t sub s we would end up with a delivered throughput of one and a utilization of one and so at this point here this is the point at which everything's coming in at the maximum rate it can without building up the queue okay and then we've got the saturation we saw earlier at the point at which your utilization gets bigger than one now you're building your queue up and basically people are out the door and down the street and around the block okay and in this deterministic world if you look at queuing delay as a function of utilization this should actually be utilization on this axis sorry about that once the utilization gets too large then the queuing delay basically starts growing without bound as to how long it takes to get your hamburger now let's look at what happens with bursts okay so the nice thing about deterministic is it's very easy to understand right you can clearly see that once you get too many of these t sub
s's coming in so that they're coming in faster than the service rate then you've got a problem and you can no longer satisfy the requests without building up your queue okay so if we look in a bursty world we got a different problem okay so in the bursty world notice the arrivals are coming in the server is handling things but now the time between arrivals is going to be random okay and so there's going to be a random variable and so people are going to arrive you know in a second and then in three seconds and then in two seconds there's gonna be variation in how long they wait and now things look a little different okay so look what happens here somebody arrives they get through the queue and now the hamburger is being cooked up and they're waiting for the hamburger but meanwhile somebody else comes in and now they came in at this point okay right after the blue one came in so the blue one's being served the white one came in and now the white one can't be served why is that well because the blue one's being served so all of this time from when the white one came in to when the blue one is done the white one's waiting and meanwhile an orange one came in now the orange one's waiting and a light blue one came in and now the light blue one's waiting and they're all waiting for the original person to get their hamburger okay and now once the original person gets their hamburger now the white one gets their hamburger okay and that's going to take the time to make a hamburger and then finally the orange one gets their hamburger and then the light blue one gets their hamburger and there might be some space here where nobody's coming in and then we might start over again and notice in this scenario the average number of customers per unit time could be exactly the same as the deterministic one except we have some burstiness where a bunch of them come in and
then we have empty spots where nobody comes in and if you notice what happens here the blue person is very happy because they get their hamburger in the normal time but white is not so happy because white waits from the point they came in for a much longer period to get their hamburger because they're sitting in the queue orange is even worse right orange comes in and they have to wait until here to get their hamburger and light blue has to wait until there to get their hamburger so light blue is really waiting a long time okay and so just the addition of burstiness even with the same average inter-arrival time t sub a we end up with a hugely increased waiting time okay questions just randomness on the input okay so everybody see how it is that white here comes in right after the blue one but now they're sitting in the queue all this time and then they get to be served and then they're done and so white is basically waiting from the point they come in the door to here before they have their hamburger and blue just waited a short time so the average waiting time is much longer than in the deterministic case yes even though lambda is the same in the two cases when there's burstiness in the arrivals the average waiting time goes up okay yes pretty strange right so randomness causes all sorts of weirdness okay now of course the other thing is we'll talk about average waiting time which is really you know blue's time from the moment they came in to when they have their hamburger versus white's until they have their hamburger versus orange's averaged over the whole system that's going to be a number that we're going to compute in a moment all right now so requests arrive in a burst so the queue actually fills up whereas in this previous case in the deterministic case with all of the parameters the same there's never anybody in the queue right
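the effect being described here can be checked with a quick simulation of a single fifo server (this is a sketch i've added, not from the lecture slides, and the numbers are made up): with the same average inter-arrival time, random arrivals produce a much larger average wait than evenly spaced ones

```python
import random

def avg_wait(arrival_times, service_time):
    """average time from arrival until service completes (single fifo server)."""
    server_free = 0.0
    total = 0.0
    for t in arrival_times:
        start = max(t, server_free)          # wait in the queue if the server is busy
        server_free = start + service_time   # serve this request
        total += server_free - t             # latency = queueing time + service time
    return total / len(arrival_times)

random.seed(0)
n, t_a, t_s = 10000, 2.0, 1.0                # mean inter-arrival 2s, service time 1s
deterministic = [i * t_a for i in range(n)]  # one arrival every t_a, without fail
t, bursty = 0.0, []
for _ in range(n):                           # memoryless (exponential) inter-arrivals
    t += random.expovariate(1.0 / t_a)
    bursty.append(t)

assert avg_wait(deterministic, t_s) == t_s   # nobody ever queues
assert avg_wait(bursty, t_s) > t_s           # same average lambda, longer average wait
```

same average arrival rate in both runs, but only the bursty one builds up a queue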
Somebody comes in, walks to the counter, and gets their hamburger; their queue is really a null queue — it's never filling up at all. Whereas in the bursty case we actually fill up the queue: here the queue has depth three at the point the light blue person has come in — you have white, orange, and light blue sitting in line — then only orange and light blue, then just light blue, then nobody. Good — I don't want to belabor the point: same average arrival rate, but almost all the requests experience large queuing delays even though the average utilization is low. On average we're not necessarily using all the hamburger capacity we could, but people coming in bursts means they end up waiting in line. If you think about it, this is your common experience: everybody shows up at noon at a Peet's Coffee and you have that queuing problem — and the queuing problem is because of the burstiness of the arrivals. Now, how do you model burstiness of arrival? The time between arrivals is now a random variable, and there is a lot of elegant math that we're not going to go into in great detail, but one of my favorites is the memoryless distribution. This is the probability that the time between one arrival and the next is a given value, and it has an exponential curve: the probability density is λe^(−λx), which is what I plotted here. Lambda in this instance is the arrival rate, and the curve shows you the probability distribution of how long it takes between the first arrival and the second. Why do they call it memoryless? If you remember your probability: suppose I've already been waiting for two units of time — what's my conditional probability, given that I've already waited for two? I cut off the first two units and rescale everything, and what you see is exactly the same curve. The reason it's called memoryless is that the amount of time you've waited says absolutely nothing about how long you're going to wait — just like buses in Berkeley. You've waited for an hour, and that tells you nothing about how much more time you're going to wait, because it's a memoryless distribution. The mean inter-arrival time — the average time between arrivals — is 1/λ. There are lots of short arrival intervals and a few really long ones; the tail is really long. So I understand — are the buses in SoCal better or worse than in Berkeley? Worse? So SoCal buses are dead? Well, all right — I guess in the memoryless model we're at least assuming the bus will eventually come; it may be days later, but it'll show up. So here's what's cool about memoryless distributions: if you don't know anything about the probability distribution for arrivals, but you know there's a bunch of factors that all feed together to generate the random variable, then you can often model it as a memoryless distribution without knowing anything else. For instance — and here's how we often use it — you have a bunch of processes all making disk requests; they're random about it, they're not correlated in any way, and they all submit at random times. But if you look overall at the rate at which requests are submitted — so many requests per second — you can figure out what that rate is and then model it as a memoryless distribution. It gets you somewhere; it may or may not be perfect, but at least it's a start. So people often use memoryless distributions to model input distributions when the only thing they have is the rate of arrival. But notice: lots of short intervals, and then some really long ones — long tails. Now, in the simple performance model of the queue, we have a rate λ in and a rate μ out, and the queue grows at rate λ − μ. If you think about it, that makes sense: when the rate in is faster than the rate out, the queue is growing on average at λ − μ. All right — now let me very quickly remind you of some things, and then we'll put up the queueing result. If we have a distribution of service times — think of the disk: how long does it take to get something off the disk — we can talk about a couple of quantities. There's the average or mean, the sum over t of p(t)·t; that's the center point — you can think of exam scores. Then there's the variance, the standard deviation squared, which represents how far the distribution spreads from the mean: if everything were a single peak at the mean, the standard deviation would be zero; otherwise it tells you about the spread. Those two should be very familiar from exams — what's the average of the exam, what's the standard deviation. Sigma squared is called the variance, and it's a little easier to compute, so usually you compute σ² and then take the square root to get the standard deviation. And then there's a third quantity, the squared coefficient of variation.
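The "cut off the first two units and rescale" argument can be checked numerically. A small sketch, with an arbitrarily chosen rate λ (the numbers are mine, not from the lecture):

```python
# Sketch checking the memoryless property: for an exponential inter-arrival
# distribution, having already waited s says nothing about the remaining wait.
import math

lam = 0.5                                  # arrival rate (lambda), chosen arbitrarily

def tail(t):
    """P(wait > t) = e^(-lambda * t) for the exponential distribution."""
    return math.exp(-lam * t)

s, t = 2.0, 3.0
conditional = tail(s + t) / tail(s)        # P(wait > s + t | wait > s)
print(conditional, tail(t))                # equal (up to floating point):
                                           # the first s units are "forgotten"
```

The conditional tail after waiting s is exactly the original tail, which is the bus-in-Berkeley point: an hour of waiting tells you nothing new.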
The squared coefficient of variation is an interesting one which I'm sure you've probably never seen: you take the variance divided by the mean squared, and that's a unitless number. The funny thing about C is that no matter how complicated the distribution is, you can learn a lot about it from C without knowing anything else. Let me pause here for a moment — I'm assuming this is mostly review for you — but are there any questions on this? On the x-axis is how long you're waiting for the bus, for instance; each little slice underneath is the probability that you wait that amount of time. There's a way to compute the mean, which tells you the average amount of time you wait, and the standard deviation, which tells you the spread. And the key thing about memoryless distributions is that their exponential shape means you don't learn anything from knowing how long you've already waited. p(t) is the probability that you wait time t: if you look at this as a curve where everything sums to one, then pick a t — say you've waited two hours — and the height of the curve there is p(t). Does that help? In the continuous case you'd take an integral — the things shown here as sums would be integrals. Yes, exactly correct. Now, for the memoryless distribution it turns out C is one, because the variance and the square of the mean are equal to each other. So often, when you see a C of one, you end up with something behaving like a memoryless distribution — even other weird distributions that don't look like this curve but have a C of one will often behave, from a queuing standpoint, as if they were memoryless, which is kind of interesting. When C is one, the past says nothing about the future. When there's no variance — the deterministic case — C is zero, because the variance is zero. There's another case, C equal to 1.5: typical disks have a C of about 1.5, a situation where the variance is a little wider than memoryless, so you end up with a slightly different distribution, but that's typically what people see on disks. So, to finish this off, think about queuing theory — we've been leading up to this anyway. You can imagine a queuing system where you draw a box around a queue and a server; you have arrivals and departures, and the arrivals on average equal the departures on average — otherwise the system blows up. The arrivals have a probabilistic distribution, the service times have a probabilistic distribution, and what we're going to try to figure out is how big the queue is on average. For instance, with Little's law applied to this, if we know the amount of time I wait in the queue, T_q, then T_q times λ — the rate at which things come in — tells me how long the queue is. So if we can compute one of these, we get the other one pretty easily; perhaps we're interested in computing T_q, and then we can figure out the length of the queue later just by using Little's law. All right, some results. The assumptions: first, the system is in equilibrium — we talked about that earlier; there's no limit to the queue; and the time between successive arrivals is random and memoryless on the input. We're going back to our notion that memoryless represents a bunch of random, uncorrelated things all summed together coming in.
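As a sketch of those definitions — the sample values are made up — mean, variance, and the squared coefficient of variation C = variance / mean²:

```python
# Sketch of the statistics from the slide: mean, (population) variance,
# and the unitless squared coefficient of variation C.

def stats(samples):
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, var, var / mean ** 2     # last value is C

# Deterministic service: every request takes exactly the same time -> C = 0.
print(stats([20.0] * 5))

# For a memoryless (exponential) distribution, mean = 1/lambda and
# variance = 1/lambda^2, so C = 1 analytically:
lam = 2.0
mean, var = 1 / lam, 1 / lam ** 2
print(var / mean ** 2)                    # 1.0
```

This matches the special cases in the lecture: C = 0 deterministic, C = 1 memoryless, and a disk's empirically wider distribution would come out around 1.5.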
We'll call that input memoryless with some rate λ, and our queueing theory is going to assume it. The departures, though, are going to have an arbitrary distribution: the input is memoryless, but the service could be anything — like a disk drive. So if you look here, we have an arrival rate λ from a memoryless distribution and a service rate μ, where μ is one over the time to service: T_s is the average time to service a customer, and μ = 1/T_s. C is the squared coefficient of variation of the server. In a typical problem you're going to be given a couple of these variables and you'll have to compute the others. Often you'll have to figure out what C is, and usually there's a very clear way: if it's a deterministic server that always takes exactly the same time, C = 0; if it's a memoryless service time, C = 1; otherwise we tell you what C is. Notice that if you know the average time to serve something, you can take one over that to get μ, or if you know μ, one over that gives the average service time — these are related to each other, so typically from three of these variables you can get the other two. So, for a memoryless service distribution — what's often called an M/M/1 queue, where not only is the input memoryless but the server is memoryless as well, so C = 1 — the time in the queue is ρ/(1 − ρ) times the service time. If your disk on average took one second, and ρ is, say, one half, then one half over one minus one half is one, so the time in the queue is about one second. That's the very simple M/M/1 case. And amusingly enough, if you have a general service time on the output — not memoryless — you just add a factor of (1 + C)/2. The only difference is that C now varies. And notice the relation between the first formula and the second: if C = 1, then (1 + 1)/2 = 1, so the general formula merges into the M/M/1 one when C equals one and you have a memoryless input. Yes — 126. There are some similarities there; fortunately we're not going to go any further than this. Are the dashes part of the equation? No — I'm sorry, that's confusing; the dash is part of the PowerPoint. I realize that's confusing, my apologies — in fact I'll fix the slide when I put up the PDF so it doesn't have the dashes, because I agree that's bad. So here are some results. If we know the time in the queue — which we can compute from the utilization and the service time — then from Little's law we can get the length of the queue. We can compute ρ as λ/μ_max, or λ·T_s, and work this all out to find, for instance, that the length of the queue is ρ²/(1 − ρ). I hope you've all seen this: with 1 − ρ in the denominator, what happens to this equation — to all of these equations — as ρ, the utilization, goes to 1?
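The two formulas can be written down directly. A hedged sketch — the function names `tq_mm1` and `tq_mg1` are mine, not the lecture's — of time-in-queue for memoryless arrivals:

```python
# Queueing delay before service, with memoryless (exponential) arrivals:
#   M/G/1 (general server):            Tq = Ts * (1 + C)/2 * rho / (1 - rho)
#   M/M/1 (memoryless server, C = 1):  Tq = Ts * rho / (1 - rho)

def tq_mg1(ts, rho, c):
    """Average time in queue; rho = lambda * ts must be < 1 for equilibrium."""
    assert 0 <= rho < 1
    return ts * (1 + c) / 2 * rho / (1 - rho)

def tq_mm1(ts, rho):
    return tq_mg1(ts, rho, c=1.0)         # M/M/1 is the C = 1 special case

# Disk with a 1-second average service time at 50% utilization:
print(tq_mm1(1.0, 0.5))                   # 1.0 second in the queue
# A deterministic server (C = 0) halves the queueing delay:
print(tq_mg1(1.0, 0.5, c=0.0))            # 0.5 seconds
```

The last two lines mirror the lecture's point that the general formula collapses to the M/M/1 one when C = 1, and that less service-time variance means less queueing.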
What do we see? Infinity — that's right. This is a curve that blows up, just like you've been seeing. So rather than the ideal system performance, the moment we have some randomness on the input we suddenly don't have that green curve; instead, the time in the queue is ρ/(1 − ρ), and we get this. The latency goes up to infinity as our input rate gets close to μ_max, which is the same as ρ approaching one. This behavior is because these equations all have ρ, or 1 − ρ, in them, and ρ is λ/μ_max — so as λ/μ_max goes to one, we blow up. It's a very funny side effect of randomness on the input, because with determinism on the input we would get the green curve — look at the difference — and obviously we wouldn't be going past one there either, but we would have much less of a blow-up. So why does the latency blow up as we approach 100 percent? Because the queue builds up on each burst and then never drains out — very rarely do you get a chance to drain — and so you've got a problem. I think of this curve as an indicator of all sorts of things in engineering, and in life for that matter: you never want to get close to 100 percent utilization on anything, because all the things you're going to encounter have this blow-up behavior as you get close to 100 percent. That's because there's randomness in pretty much everything, and just that little bit of randomness causes this weird behavior, and now you have to worry about that 100 percent. Think about it: you've got a bridge rated at 100 tons — you don't want to be running 99 tons over that bridge, because the slight randomness in that weight, with some extra wind or whatever, is going to cause the bridge to collapse, and you've got a problem. One interesting thing is what we'd call the half-power point: the load at which the system delivers half of its peak performance. Keep in mind that what we're seeing here is latency. What is latency? It's the time from when I get in the front door of the McDonald's to when I have my hamburger — that's what I perceive as latency. But what we do know is that at the half-power point, where λ = μ_max/2, the servers at the counter are handling half as many hamburgers per unit time as they could. It doesn't matter that I, as a hamburger consumer, see a really long latency — I'm getting a lot of hamburgers out the door if I'm the McDonald's. In fact, as I get closer to one I'm actually happy as the McDonald's owner, because I'm getting my maximum hamburger rate out the door. But from the standpoint of the overall system, this half-power point is often a really good place to be, because it's just before things really blow up on latency; the system is operating pretty well there. Once you get to the right of that point, you've got problems, and you have to start worrying that there's basically too much load in the system. And that's when you have to start thinking: okay, what do you do? You can do lots of things. If I want λ/μ_max to be smaller, I could make μ_max bigger. What's the simplest way to make μ_max twice as big as it was before, in the case of hamburgers — anybody? Add a server — exactly. Double the restaurant: if you double the number of people cooking hamburgers, you've pulled yourself back from the brink, back to the half-power point. Order from another McDonald's? Yes, you could do that too — that's another server. So the point here is that we can go for more servers, or we can try to reduce λ; those are two ways of improving our current situation.
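A quick numeric sketch of that blow-up, evaluating the M/M/1 delay T_q = T_s · ρ/(1 − ρ) at increasing utilizations (the specific utilization values are mine, chosen to show the trend):

```python
# Why latency blows up near 100% utilization: the M/M/1 queueing delay
# as a multiple of the service time, at a few loads.

def tq_mm1(ts, rho):
    return ts * rho / (1 - rho)

ts = 1.0                                   # one service time as the unit
for rho in (0.5, 0.9, 0.99, 0.999):
    print(rho, tq_mm1(ts, rho))
# At the half-power point (rho = 0.5) you wait one service time in the queue;
# at 99.9% utilization you wait roughly a thousand service times.
```

This is the quantitative version of the bridge analogy: the last few percent of utilization cost orders of magnitude in latency.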
I wanted to close this out a little bit. First, let me back up and show you: if I know C, and I can compute ρ, and I know T_s, then I can come up with T_q, which with Little's law lets me figure out the length of the queue. So pretty much three items — ρ, C, and T_s, or different combinations of the variables below — give me enough to compute how long somebody waits in the queue, which gives me enough to figure out the number of items in the queue. The thing to take away from today's lecture is that once you've figured out how to identify these different pieces, you can plug them in and get a back-of-the-envelope estimate of where you are on the curve. Are you in the reasonable, linear area, where a slight increase in utilization doesn't blow up the time? Or are you in the part where a very slight increase in utilization suddenly gives everybody a huge increase in average latency? That's what you want to get out of these equations. And let's look for a moment, to remind you, at the deterministic case: something arrives and gets serviced, another arrives and gets serviced, and so on — the arrivals are deterministic with no bursts, and the service time is deterministic. I can compute the average arrival rate and the average service time, and λ is exactly equal to μ in this deterministic situation — but it doesn't blow up. Why? There's no randomness on the input: I can service at exactly 100 percent if the requests are all back-to-back and things arrive at exactly the right rate. You can imagine this never happens in reality.
Instead we have the bursty case: even with the same average arrival rate, we put some burstiness in — a bunch of requests show up, then another bunch, with long tails of time in between. When we get a burst, we start servicing the requests as quickly as we can because they're in the queue, and then there's a long tail where nothing happens for a while, then another burst, and so on. Why do we get this response-time blow-up as we get close to 100 percent? Because with burstiness we've got these little gaps, and we never get a chance to make up for the lost time. That's why burstiness leads to this growth curve. Good — so let me give you a little example. Suppose a user makes 10 8-KB disk I/Os per second, and the request and service times are exponentially distributed — so C = 1; exponentially distributed and memoryless are the same thing, right? The average service time at the disk is 20 milliseconds, which is controller plus seek plus rotational plus transfer time added together on average. Now we can ask questions like: how utilized is the disk? ρ = λ × T_ser. What's λ here? λ is 10 requests per second, and the service time is 20 milliseconds, which is 0.02 seconds — don't forget to keep your units consistent. So ρ, the server utilization, is λ·T_ser = 10 × 0.02 = 0.2. That's a low utilization, so I know I'm doing okay. The time in the queue — and by the way, I'll fix this slide; it should read ρ/(1 − ρ), sometimes people use u for utilization — is the service time times ρ/(1 − ρ): 20 ms × 0.2/(1 − 0.2), which comes to 5 milliseconds, or 0.005 seconds. So the time sitting in the queue is only 5 milliseconds, the service time at the disk is 20, and the total time from when I submit the request to when I'm done is 25 milliseconds — the sum. The average length of the queue here is only 0.05, so this queue is really not building up: it has on average 0.05 items in it. If I make the requests come much faster, I'll very quickly get to where the queue completely dominates all of the time. Good — questions before I move on? And I'll fix this u/(1 − u) too; it's ρ/(1 − ρ) here — sorry, I switched my notation to be consistent with somebody else and missed one. All right. So the average time — never forget this: how long do I sit in McDonald's? It's my time in the queue plus the time being served. In this case, the 20 milliseconds being served plus the 5 milliseconds in the queue gives me 25 milliseconds total. Good — so you're now good to go on solving a queueing-theory problem. There are a bunch of good readings up in the resources section you can take a look at, and some previous midterms with queueing-theory questions as well — you should assume queueing theory is fair game for midterm three. Now, how do we improve performance if our queue is going crazy? We can make everything faster: we hire a bunch of really crazy hamburger fryers, give them ten times the heat on the grill, and they have to flip really fast — that's right, hamburger flippers on steroids. Or we could have more of them — more parallelism — which is the more reasonable thing to do. We could optimize the bottleneck: figure out what the bottleneck in frying hamburgers is — maybe it's getting the patties from the back, who knows — and optimize that to make the overall service time better. Or we could do other useful work while waiting.
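The arithmetic in this worked example can be checked directly — a small sketch reproducing the lecture's numbers:

```python
# The lecture's example: 10 disk I/Os per second, exponentially distributed
# service (C = 1), average service time Ts = 20 ms. All times in seconds.

lam = 10.0                          # requests per second
ts = 0.020                          # 20 ms average service time

rho = lam * ts                      # utilization = 0.2
tq = ts * rho / (1 - rho)           # M/M/1 queueing delay = 5 ms
t_total = tq + ts                   # request-to-completion time = 25 ms
lq = lam * tq                       # Little's law: avg queue length = 0.05

print(rho, tq, t_total, lq)
```

Note how the same three inputs (λ, T_s, C) yield everything else: utilization, queueing delay, total latency, and queue length.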
That's kind of what we do with paging: we switch to another process and run it while we're waiting for the disk to complete our paging. Queues are in general good things, because they absorb bursts and smooth the flow — but any time you have a queue, you have the potential for this kind of response-time behavior. So queues are both a blessing and a curse from that standpoint. Often what you do is limit the maximum size of a queue, so that if the bursts are too much, you apply back pressure: you slow down whoever is generating the requests by explicitly telling them they can't submit any more because the queue is full. That's a response to a queue being full, and a lot of systems do that. And you can have finite queues for admission control, which is what I just said. All right — questions? Now, when is disk performance the highest? It's highest when there are big sequential reads. What does that mean? It means I move the head, rotate to the starting point, and then just read a whole bunch of blocks — a whole bunch of sectors — off the disk. Or when there's so much work to do that you have many requests and you piggyback them together, moving the disk in a way that optimizes over the whole set of outstanding requests rather than individual ones, which might make you bounce around. And when the disk is not busy, it's okay for it to be mostly idle. So bursts are bad because they fill queues up, but they're also an opportunity: if we have a bunch of requests, we may be able to reorder things and get better overall efficiency out of our disks. And you can come up with many other optimizations: maybe you waste space by replicating things so that reads are faster — when we talk about RAID, one of the things we get out of RAID is multiple copies, which makes reading faster under high load because we can choose to get our data off any of several different disks at a time; it gives us a way to do parallelism. We might have user-level drivers to try to reduce the queueing represented by software in the kernel, or reduce I/O delays by doing other useful work in the meantime. There are many ways of making things faster. But I want to close out this discussion — I was going to talk a little about the FAT file system today, but I think I'll save that for next time. I do want to say a little about scheduling to make things faster, which is useful from a disk standpoint. Suppose we recognize the fact that the head assembly is stuck together, so we have to move the heads as a unit. How do we optimize this, given that any time we deal with mechanical movement — moving the head, or waiting for a rotation to happen — things slow down? If we allow ourselves to queue up a bunch of requests, we could do the obvious thing, which is to handle the first request first: go to track 2 sector 3, track 2 sector 1, track 3 sector 10, track 7 sector 2 — take them in the exact order in which they were queued. That would be okay, I guess, except we could very easily have to go all the way to the inside of the disk, then all the way to the outside, and back again, because we have a set of requests with no locality. The alternative is to try to optimize for head movement. One example is SSTF, the shortest-seek-time-first option, where you pick the request that's closest on the disk: if the disk head is here, I might do request one, then request two, then three, then four — reordering my requests so the head kind of spirals its way out in a single movement. And although it's called shortest seek time first, today you have to include things like rotational delay in the calculation: it's not just about optimizing seek, you also have to optimize for rotation. The pro is that you can minimize your head movement as long as you have a bunch of things queued up. The con is that it can lead to starvation: if a bunch of requests keep arriving on the queue and force the disk head to keep servicing a local area — say the inner tracks — you may never get to the outer tracks. So SSTF, even as it limits disk movement, can cause a lot of requests to get stuck and never be serviced. That's a problem, and it goes back to our CPU-scheduling discussion, where low-priority tasks could end up never getting any CPU. What's a low-priority read here? One that's far away in tracks relative to the continually arriving requests. Another thing we can do — often called the elevator algorithm — is, rather than reordering on the fly by looking at the queue, to move in a single direction at a time: we start at a given track, spiral our way out, then spiral our way back in, and so on, and as we go we grab all the requests relevant to our current direction and position. You can see why this is called the elevator algorithm: rotate the picture on its side and imagine an elevator going up and down, stopping at each floor to service people — the analogy of a floor is, of course, which cylinder you're on — and we implement it by sorting the input requests. Now, one thing to worry about is that this has a tendency to favor tracks in the middle, because we're going out and coming back in and spending a lot more time toward the inside. So there's something called circular scan, or C-SCAN, which is normally a little better: we always service going in one direction, with a very quick sweep back to the start before heading out again. Questions? Now, you might ask: who does this? Well, clearly the operating system could: it could look at all the requests it's waiting on and reorder them to do either the elevator or the faster C-SCAN algorithm, and thereby optimize head movement. Remember this is only useful when we have a full queue; with an empty queue it doesn't matter, because we're not overloading the resource — when there's a queue, we reorder it based on C-SCAN. Now here's the issue of interest with modern disks: is this something the operating system even wants to do? What could be a downside of the operating system doing this? Can anybody think of one? I think people are thinking too hard. Okay, very good — we have some interesting comments in the chat. First of all, the operating system has to know the head location; that's certainly an issue, and we'll talk more about it moving forward. But in modern disks, the controller takes in a series of requests and does all of this reordering itself, so in many cases the modern operating system and device driver don't even know exactly where the disk head is, or how the logical block IDs actually map to physical blocks. That's one issue. The second issue is that modern controllers actually take a bunch of requests in and do the elevator algorithm themselves.
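The three policies — first-come-first-served, shortest seek time first, and the one-directional elevator sweep — can be sketched in a few lines. The starting head position and track numbers below are invented for illustration, and these simplified versions ignore rotational delay:

```python
# Total head movement (in tracks) under three disk-scheduling policies.

def fcfs(head, reqs):
    """Service requests in arrival order."""
    total = 0
    for t in reqs:
        total += abs(t - head)
        head = t
    return total

def sstf(head, reqs):
    """Greedy shortest-seek-time-first: always go to the closest track."""
    pending, total = list(reqs), 0
    while pending:
        t = min(pending, key=lambda x: abs(x - head))
        total += abs(t - head)
        head = t
        pending.remove(t)
    return total

def scan(head, reqs):
    """Elevator sweep: service all higher tracks going up, then reverse."""
    up = sorted(t for t in reqs if t >= head)
    down = sorted((t for t in reqs if t < head), reverse=True)
    total, pos = 0, head
    for t in up + down:
        total += abs(t - pos)
        pos = t
    return total

head, reqs = 53, [98, 183, 37, 122, 14, 124, 65, 67]   # hypothetical workload
print(fcfs(head, reqs), sstf(head, reqs), scan(head, reqs))
```

On this workload SSTF moves the head least and FCFS most, while the sweep sits in between — and unlike SSTF, the sweep cannot starve a far-away request.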
And by the operating system I mean the device driver as well. The issue with trying to compute this on the host is that the disk itself is already doing a lot of that work, because disks are much more intelligent than you might think today. While in the old days this kind of disk scheduling was definitely done by the operating-system/device-driver combination, today some of it is still done there, but it's a bit redundant with what the disk can do. Okay — I want to finish up; actually, I think we'll pick this up next time. In conclusion: we talked a lot about disk performance last time, and we brought it back today as queueing time plus controller plus seek plus rotational plus transfer time. We talked about rotational latency — on average, half of a rotation. The transfer time comes from the disk's spec for how fast it pulls data off the platter; technically it also depends on whether you're reading from the outer or inner tracks, because transfer is faster on the outer tracks, but usually we give you an average transfer time. The queueing time is the piece we didn't talk about initially: devices have very complex interactions and performance characteristics. We talked about queueing plus overhead plus transfer, and the question of effective bandwidth, which varies by device — we covered that last time. This queue is really an interesting thing: the file system, which we haven't quite gotten to, is going to need to optimize performance and reliability relative to a bunch of these different parameters. The other thing we talked a lot about today is that bursts and high utilization introduce queueing delays. Finally, the queueing latency for M/M/1 — memoryless input, memoryless output, one queue — or M/G/1 — memoryless input, general output, one queue — is the simplest to analyze, and basically you can say that the time in the queue is the service time, times the (1 + C)/2 factor, times ρ/(1 − ρ), and that latency goes to infinity as utilization goes to 100 percent. Next time we'll talk a lot more about file systems — we didn't get to them today — picking up with the FAT file system, which is still in use today, and then moving on to some real file systems that are more interesting than FAT. So I'm going to bid adieu to everybody. Please vote tomorrow — very important — and try not to be stressed about it; I think it'll all work out well in the grand scheme. Alrighty, have a good night.
CS 162 Operating Systems and Systems Programming (Berkeley), Lecture 20: Filesystems 2, Filesystem Design (Cont.), Filesystem Case Studies

welcome back to 162 everybody. i don't know if you're like me, it's pretty hard to turn away from the continuous vote counts in the election, but let's get to some operating systems. we've been talking about how to actually store information on devices and give the proper abstraction of files and directories and so on, and we're going to get much more into that today. so we started by talking about devices and performance. if you remember, one of the things we talked about is how important it is to keep your overhead down, and so we talked about this particular graph of effective bandwidth, and it said that even though we have a gigabit-per-second link, our effective bandwidth can be a lot lower, and the reason is that there's a big overhead, for instance a millisecond, that affects our bandwidth. this graph showed you the size of a packet along the x-axis and how long it takes to send it, and this is going to be a linear graph with respect to the size of the packet, but it has an intercept that's not zero. and so that means that the effective bandwidth, which is how many bytes per unit time you actually get, is more like this red curve, and you have to get past a certain point before you can even get half of a gigabit per second. okay, and so when we look at our file systems coming up, we're going to want to keep the effective bandwidth as close to the real bandwidth as possible, and that's going to mean we have to keep our overhead low. the other thing we talked about in some detail last time was performance and queuing theory, and i just wanted to remind you of what we came up with. so among other things, we talked about how there can be many examples of queues
feeding into servers, and oftentimes you can take a much bigger system with lots of queues and lots of intermediate servers and boil it down to this. and boiling it down to this, we were basically talking about a system in equilibrium, among other things, so this queue is neither growing nor shrinking without bound. probabilistically we have an arrival rate that's some lambda per unit time, and we have a service rate which is mu, and by the way mu can also be one over the average service time of the server. and so the different parameters we talked about: lambda is the average or mean number of arriving customers per second. T_ser is the average time to service a customer, sometimes that's called the mean or the M1 moment. we have C, which is the squared coefficient of variance, and that's the standard deviation squared over M1 squared. this is unitless, and what's amazing is that C basically tells us all we need to know; we don't need to know how complicated the overall probability distribution was. so this server is allowed to have an arbitrarily complicated service distribution, and we can compute the average service rate by taking one over that average service time. the arrival rate, however, is memoryless in our model, so we're keeping things very simple here. computing from our parameters: if we know the average time to serve, or the average time to get your hamburger at mcdonald's, then one over that is mu. the server utilization rho is lambda over mu, or multiplying this out, lambda times T_ser. and the interesting thing we talked about is the fact that this server utilization can't be bigger than one. can anybody remember why it can't be bigger than one? what happens if rho is greater than one? the queue grows without bound, yep, that's a problem. okay, so the parameters we might wish to compute: if you notice here, these three red ones are enough to compute the green ones and
pretty much everything else, so if you have pretty much three out of these five, then you can compute the others. and notice, by the way, that C is talking about the standard deviation of the service time of the server, since this is potentially a general probability distribution. so parameters we might care about: how long are you in the queue, that's T_q, or what's the length of the queue on average? and using little's law, which we talked about last time, that's lambda times T_q. and some results that matter here: for instance, if you have a memoryless arrival rate and a memoryless service rate, an M/M/1 queue, then the time in the queue turns out to be the service time times rho over one minus rho. and if you have a general distribution, where it's not memoryless going out, then notice the difference between these two is very close: it's still the service time times rho over one minus rho, with an extra factor of one half times one plus C. and the interesting question that we confronted last time was why the service latency blows up, and the simple answer is that all of these models come out with this rho over one minus rho factor, so that as rho goes to one, this thing blows up to infinity. and so really this behavior that we're seeing here, potentially going to infinity if there's an infinite queue, is solely the fact of this rho over one minus rho. all right, i gave some examples at the end of the lecture last time, and i would say you might want to remember them, this could be a useful thing for midterm three. i would go through the last lecture to understand what these numbers mean, but the most important back-of-the-envelope queuing theory results we've talked about in this class are these two, the M/M/1 queue and the M/G/1 queue. and the one at the end, by the way, just means there's a single queue, or a
single server, excuse me. so we could have multiple servers, and then the equations would be a little different, but if we were going to do that to you, we would give you the equation. all right, any questions before i move on? so once you've got these equations, by the way, it's just a matter of plugging and chugging. you get some estimates of, say, disk service rates; you might have estimates of how often requests come in from the user processes to the kernel, this queue might be queuing in the kernel, and this service rate has to do with the disk drive, which we gave you a way of computing, how long it takes to do something with the disk. and so once you know some of these parameters, then you can make an estimate: is my queue going to blow up, or do i need another disk in order to get more service rate here? okay, good. so we also talked, a couple of lectures ago, about a few ways of hiding i/o latency, which i wanted to bring up because, as we start designing file systems, you'll be able to start seeing where we can put some of these different options in. the blocking interface, of course, is the one that you've learned from pretty much lecture number two or whatever, which says that i do a read, i say i want to read so many bytes, and the system call doesn't return until we have that number of bytes or until there's an end of file. when we go to write, i say write this number of bytes, and it won't return until they've all been written, or if it does return, it at least tells us how many bytes have been written. okay, so that's a blocking interface. the other two i gave you a few lectures ago were the non-blocking and the asynchronous. the non-blocking interface basically says do what you can immediately and then come back and tell me, don't wait; so the non-blocking interface may require you to process it in a loop, but you'll never be blocked waiting. the asynchronous interface is what i like to call the tell-me-later interface. this is an example where the user code hands a buffer to the kernel and says do my read of 100 bytes and put it in this buffer, and then it immediately returns from the kernel, but you get a signal later that says it's ready. okay, and so those are kind of two asynchronous options, and the reason they're interesting to bring up is that in the kernel, the non-blocking and asynchronous interfaces are really what the devices provide. they don't provide blocking; that's something that we give to the processes, and it's an abstraction. okay, so the asynchronous interface is exactly like a type of callback, yes. and if you're interested, you can often turn this on for file systems and other devices by using the ioctl interface on the file descriptor after you've opened it. so that's a good question. okay, so if you remember, we've had this kind of diagram almost from day one. we talked, in lecture four even, about a bunch of different ways of accessing files, like streams and file descriptors, et cetera, so that's fopen versus open. and then we've talked a bunch about devices over the last couple of lectures, and so today we're going to talk about what's in the middle. and what's in the middle is interesting because above we have this abstraction of byte streams, streams of bytes where we can ask for 12 bytes or 13 bytes or whatever. underneath we know that there are blocks; we talked about disks having sectors, or multiple sectors together giving you a block, and so that's not byte oriented, that's block oriented. so somehow, in the middle, the file system has to provide a matching between the blocks underneath and the streams above, and that's what the file system is going to help us do. okay, and of course the things you're all used to with files, like looking them up in directories, opening them, closing them, writing them, all of that stuff needs this thing in red here to work properly
and so that's what our next couple of lectures are about. okay, so how do we go from storage to file system? up at the top level here we have variable-size buffers, and the api and syscalls that we're using are all about give me this number of bytes, and maybe we set the offset, give me this number of bytes at some offset, or write these bytes at some offset. underneath is the file system, which is a block-based interface, and a typical block that we might talk about is four kilobytes; that's a pretty common block size, which you should recognize from when we were talking about virtual memory as well. the devices underneath mostly map to these blocks. so underneath we have sectors, which are smaller than a block, but typically we put a bunch of sectors together on a track on a disk and we call that a block. and so the physical sector, being the minimum chunk of bytes that you can read or write, could be either 512 bytes, which is pretty standard, or on the really big drives that we have these days, four kilobytes. okay, so that's the basic chunk of bytes that you can read and write, and somehow, again, we're going to have to go from this variable size up top, through the block interface, to the actual physical interface of the disk drive. and one of the things we're not going to talk about today, but next time, is that we want to try to put some sort of caching in here to make this faster, because, you know, i've sort of joked at various times this term that pretty much everything in operating systems is a cache. and so obviously there's going to be a cache somewhere here, but we're going to deal with structure first and then we'll cache it later. we also talked about ssds, or flash-based disk drives. so one of the things that's different from just raw flash chips is that when you put them into an ssd, this interface between the operating system, through the device driver, and the device has a lot of similarities between a disk drive and an ssd
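the sector-to-block arithmetic above is worth making concrete; a tiny sketch, assuming 512-byte sectors and 4 KB blocks (the helper names are mine, just for illustration):

```c
#include <assert.h>

#define SECTOR_SIZE 512
#define BLOCK_SIZE  4096
#define SECTORS_PER_BLOCK (BLOCK_SIZE / SECTOR_SIZE)   /* 8 sectors per block */

/* logical block n is built from 8 consecutive sectors: 8n .. 8n+7 */
int first_sector_of_block(int bn) { return bn * SECTORS_PER_BLOCK; }

/* which 4 KB block holds a given byte offset into the device */
int block_of_offset(long off) { return (int)(off / BLOCK_SIZE); }
```

so, for example, byte offset 5000 lands in block 1, which in turn covers sectors 8 through 15 on the device.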
and in fact there is a layer in there that makes that ssd look like spinning storage, except it doesn't have the seek time and the rotational time to slow you up. okay, and if you notice, the other thing that's very unique about the ssd, which i mentioned, and if we get to something at the end of the lecture today i'll show you again, is that the blocks in an ssd can never be overwritten in place. you have to take an erased block and write to it; you can't take a block you've already written to and change the bytes. what you have to do, if you're going to change a physical block, is find a new physical block, copy everything over except for what you want to change, and then the previous block gets garbage collected. and the way that the operating system doesn't have to deal with that is this translation layer: the logical block numbers that the file system and the device driver think you're using are actually translated inside the ssd to the physical blocks of flash memory. and that translation layer in the firmware is responsible for making sure that things don't wear out, so that you're not overusing some particular physical block by writing it, erasing it, writing it, erasing it, over and over again; instead there's actually wear-leveling firmware that makes sure that the ssd doesn't get worn out. okay, and the other thing that needs to be kept track of is the bunch of erasures that happen: you actually have to work to make sure that you erase a bunch of blocks that aren't in use anymore so that you can have them ready to go. all right, and so that's a fundamentally different aspect from hard disk drives, where you can actually overwrite the sectors. but the interface is pretty much the same, it's dealing with four-kilobyte blocks that are read and written; it's just that the underlying physical thing is a little different. but we're popping up. okay, so now there's a good question here: if you overwrite a block with zeros to erase the file, is there any way to tell the ssd to actually erase it? that's a really good question, and the answer is not always yes. there are some modern ssds that have the ability to encrypt things natively on the drive itself, and then you have a little more control over it, but just because you write a bunch of zeros into block number 536 absolutely means nothing in terms of what actually happened to the data underneath, because you're writing to a completely new block. okay, that's a good question. now, where am i here? okay, so how do we build a file system? what's a file system? a file system is a layer of the operating system that transforms the block interface of the disks into the files and directories and things you're used to. and so this is a classic operating system situation that you're very familiar with, hopefully, by now, having been doing this all term, where you take a limited hardware interface, which is an array of blocks, and you provide a new virtualized interface that's much more convenient and provides, in this instance, a whole bunch of features. like naming, so we can find files by name, not necessarily block numbers; we can organize the file names inside of directories; we can map files into blocks and figure out which blocks belong to which files; and then of course things like protection and reliability are important things as well, which is, we want to enforce access restrictions to prevent unauthorized parties from reading and writing files that they're not supposed to, and for reliability we're going to want to put some level of redundancy into the system to make sure we don't lose our data even though we have crashes and hardware failures, et cetera. okay, so this level of abstraction is really what the file system's about, and i'm going to give you a number of actual case studies to show you how people have done that in several file systems that are
currently actively used. so, again, what we said a little bit ago, but i wanted to repeat: the user's view of files is that they're durable data structures, you put the data in and it doesn't go away. the system's view is that a file is a collection of bytes, that's the unix view at the system call level, and it doesn't really matter what data structures you put on the disk. the interesting thing is the user only really knows how to interpret the bytes; unix makes no restrictions on how you structure those bytes, it's entirely up to you. so from the system's point of view it's a bag of bytes. and then when you get underneath the system call interface and into the actual file system and caching system, the system's view underneath there becomes a collection of blocks, because the block is the logical transfer unit. and the block size typically is bigger than the sector size, where the sector is the physical transfer unit, the minimum transfer unit on the disk. we bring it into blocks because the sector, typically 512 bytes, is just too small, and so we turn a bunch of sectors into a block and that's what we read and write off the disk. all right, so you can kind of look at this: here's the user, they have a file full of bytes, they talk to the file system, the file system talks to the disk, and when all is said and done the user thinks they have files that are a bunch of bytes. okay, so that's our goal. so just to hammer this home a little bit, what happens if the user says give me bytes 2 through 12?
well, what happens is the file system has to fetch the block that has those bytes in it. so that block might be on disk, in which case it's got to pull it into a cache, and then, since that block is probably 4 kilobytes, it has to figure out where bytes 2 through 12 are, package them up into the user's buffer, and return them. okay, now it's quite possible that the second time we ask, say when we go to ask for bytes 13 through 36, maybe that block's already in the cache and we don't actually have to go out to the disk. now there's an interesting question here: what if you have multiple files with different permissions in the same block? the answer is that doesn't really happen; that would be a bit of a failure of the file system, because right now the file system provides a one-to-one mapping between files and underlying blocks. so the permissions are on the files, not on the individual blocks, because the blocks are assembled into files, and the metadata for permissions is actually in the inode, which i'll show you in a little bit. okay, good question though. so what happens if we go to write bytes 2 through 12? this is a little trickier, and i wanted to make sure this is clear: since you can only deal with blocks at the disk level, you have to pull your block in, overwrite bytes 2 through 12, and then write it back out, before you can modify bytes 2 through 12.
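the read and write paths just described can be sketched against a toy block device; the in-memory disk array and helper names here are mine, just to show the whole-block fetch and the read-modify-write:

```c
#include <assert.h>
#include <string.h>

#define BLOCK_SIZE 4096

static char disk[4][BLOCK_SIZE];   /* toy in-memory "disk" of whole blocks */

/* the device only moves whole blocks, never individual bytes */
void block_read(int bn, char *buf)        { memcpy(buf, disk[bn], BLOCK_SIZE); }
void block_write(int bn, const char *buf) { memcpy(disk[bn], buf, BLOCK_SIZE); }

/* read `len` bytes at offset `off` within one block: fetch the whole
   block, then copy out just the slice the user asked for */
void read_bytes(int bn, int off, char *user_buf, int len) {
    char buf[BLOCK_SIZE];
    assert(off + len <= BLOCK_SIZE);
    block_read(bn, buf);
    memcpy(user_buf, buf + off, len);
}

/* write `len` bytes at offset `off`: read-modify-write, since we cannot
   update just a few bytes of a block in place on the device */
void write_bytes(int bn, int off, const char *data, int len) {
    char buf[BLOCK_SIZE];
    assert(off + len <= BLOCK_SIZE);
    block_read(bn, buf);              /* 1. pull the whole block in       */
    memcpy(buf + off, data, len);     /* 2. patch just the bytes we want  */
    block_write(bn, buf);             /* 3. push the whole block back out */
}
```

notice that even an 11-byte write costs a full block read and a full block write at the device, which is exactly why caching blocks in ram is going to matter so much.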
you can't actually go in here and only write a few bytes on the disk; it's just not possible. so you can start to see why having blocks stored in ram, at least temporarily, is going to be really important, because at minimum we're going to need to bring a block in, overwrite a couple of bytes, and store it back out. we're going to do much better than that when we get to the block cache, but we're not there yet. and of course everything inside the file system itself is in terms of whole blocks: the actual i/o happens in blocks, and any reading and writing of something smaller has to happen across this file system interface. okay. now, how do we manage a disk? in the next half hour or so we're going to talk specifically about disk drives, but we're going to generalize some ideas about how to manage a disk. the basic entities on a disk that we're going to want are files and directories. a file is a user-visible group of blocks arranged in some logical space, or what i like to say, a bag of bytes. a directory is a user-visible index mapping names to files. so we're going to have to figure out how to do that, so that we can turn a file name into something that's a file, and that's going to be part of what the file system does. the disk is a linear array of sectors, so how do you identify those sectors? well, there's a couple of ways to do that. one, which was used in the original disks before they got too big, is that a sector was really just a vector of which cylinder, surface, and sector it's on. so if you remember, a cylinder is all of the tracks that are on top of each other, and it really represents the positioning of the head assembly; the surface is which one, top or bottom, and which platter, so that's which surface you're on; and then which sector. so the sector itself is a three-tuple defining where that thing is on the disk. it's not really used anymore, and one of the reasons was things got so big that the bioses weren't able to keep up with it, and in this scheme the os, or the bios, which is lower level than the os, had to deal with bad sectors, and the disks just got so big that it wasn't working anymore. so at some point we switched over to logical block addressing, where every sector has an integer address starting from zero and working its way up to the size of the disk, and the controller does a mapping from the integer number to a physical position and shields the os from the structure of the disk. okay, so the ssds don't actually expose a cylinder-surface-sector interface either, so that was a good question in the chat: pretty much this logical block addressing is what had taken hold before ssds were really very popular, so the ssds are giving you this lba level of interface, which is a logical ordering of blocks from 0 to n. now this has some consequences. if you recall from last lecture, we talked a bit about elevator algorithms, which basically take a bunch of requests and rearrange them so that the disk does a nice clean sweep rather than randomly going all over the place. once you have logical block addresses, you're only really guessing that blocks that are next to each other in the address space are close to each other and in the same track. you don't quite have the same level of information that you had before, but operating systems still try to do a job of optimizing for locality; it's just not quite as precise as it was back in the days with physical positioning, with cylinder-surface-sector. now, what does a file system really need in order to work? well, it has to track which disk blocks are free. and in the case of the ssd, it's also tracking which "blocks", and i say that in quotes, are free; it just knows the logical block number, it doesn't really know what physical part of the flash chips is storing that, but it has a notion that there are these blocks and some of them are free and some aren't. so it's still doing that same idea, which is tracking the disk blocks, and you need to know that so that you know where to put your newly written data. you need to track which blocks contain data for which files, so that when you go to open a file and you start reading bytes 2 through 12, the first thing you've got to figure out is where the first block of that file is. well, it's on disk somewhere, how do you know which one it is? that's the file system's problem, right? it also has to track files in a directory so that you can look them up by file name; again, that's the file system's problem. and where do you put all this? well, since we need to be able to shut the whole system down and come back and have our data still there, all of this stuff has to be on disk somewhere. so not only does the disk hold all the data, but it's got to hold all this metadata, in a way that we can start from scratch when we turn the operating system on and reboot the machine. so, you know, you could say there's a little bit of a recursive issue here, but somehow the information that's tracking the files, we need to put that information also on disk, possibly in a file, and if you're tracking that, you can see that perhaps there's something about standard positions for the root file system or something like that, and we'll talk about that in a moment. all right, questions? okay, you guys with me so far? now, what's the story with putting data structures on disk? it's a bit different from data structures in memory. in memory i can have pointers to things that are arbitrary byte pointers, and i can do linked lists and stuff. the ideas here are the same, except that the data structures have to be made out of these minimum quanta of blocks, and that kind of changes what data structures we
use a little bit. and it turns out, once we start worrying about performance, we're also going to be very careful about which blocks are next to each other on the disk, because we're going to want to keep blocks that are adjacent in the file adjacent on the disk as well, because that'll give us better performance. the other thing is we can only access one block at a time, so you can't really efficiently read or write a single word; we already said that you have to rewrite the whole block containing it. and ideally you want sequential access patterns, where you sort of write a bunch of stuff along a track on the disk. and you can imagine that with ssds, as i've told you, every time you go to write something you actually have to allocate a brand-new erased block under the covers and use that to do your overwriting, and so part of this has to do with being careful about how much erasing and reallocating you're asking the flash translation layer to do, and so flash-aware file systems are a little bit careful about when they even decide to read and write blocks. okay, and if we get to it at the very end of the lecture, i'll tell you a little bit about f2fs, which is one of the flash file systems that's in use these days. so now the other thing to start thinking about is that when you go to write something on disk, it takes a little while to get there. and furthermore, if we have these data structures that are on disk and they have to look a certain way, there's some consistency in those data structures: ideally, when we go to shut the whole system down and turn it off, the disk is in a completely meaningful, consistent state. and i don't know if any of you have ever lost data because your machine crashed at the wrong time, i'm sure there are many of you; then you'll know that the file system doesn't always shut down in a clean state, and so, although we won't get to this this time, next time we're definitely going to start talking a bit about journaling and some of the other techniques for making
sure that data is never lost even when we have sudden shutdowns. okay, so that's going to be important. now let's see, i don't have a lot of administrivia. we're almost, almost, almost done grading, so i feel almost like i'm talking about counting votes in the current election: we're getting there, it's going to happen, and you'll know as soon as we're ready. the other thing is, and i think everybody's probably done this, but make sure to fill out the post-midterm survey, let us know what we're doing and how we can improve. and the other thing, which i'm not sure we put into the survey, but you're welcome to forward to me individually, is if there are any particular topics you'd like to talk about in the last lecture, let me know, and i might throw together an interesting lecture with topics that were requested by people. so feel free to take advantage of that. i've actually had people ask me about things like quantum computing, which is not really 162, but i'm more than happy to talk about things as long as i can say something meaningful about it. yes, i would say that the results of the midterm grading are going to be far less contentious than the results of the election. we shall see, but as you have seen, for those of you that are watching the counting of the election, slow and steady is the name of the game. so this is all about taking a breath, which is good; breathing is good, by the way, too. the other thing i wanted to point out is if you have any group issues going on, whatever, make sure your ta knows about it, make sure you reflect those issues in your group evaluations, give us some good feedback, because we will take all of these things into account at the end of the term. so i don't think there's anything else to say about administrivia; i think that's pretty much what i had to say, so i think we're good to go, unless anybody has any questions. the term is kind of winding down; we're down to the last maybe six or seven lectures. sometimes i do my special lecture in rrr week, we might do that, but we're getting down to the last few. okay, no more questions, so let's talk about designing file systems. so what are some critical factors? well, clearly performance, and hard disks are a good example of trying to get performance out of a less-than-ideal device from a performance standpoint, because if you have to seek, and then you have to wait for rotational latency, and then you can read, that's going to take a long time. i showed you several examples a couple of lectures ago which showed you that if you have to seek, your total bandwidth goes way down, versus if you don't have to seek, your bandwidth is much higher, and so we're going to want to do a good job of the same thing in our file system. so that's going to be important, and this, by the way, is hard to get right. what's great here, by the way, just to put one more point on it, is that when you get to ssds, then randomly writing things in your logical block space, or reading from it, is no longer a performance issue, because pretty much every block takes about the same amount of time to read, and so some of the optimizations for disk drives are less required on ssds. okay, now other things that really just feed into the unix view of the world: we always have to do open before reading and writing. so, you know, you just think about that: you do an open system call and you get a file descriptor back, and then you can do reads and writes. what's good about that model is you can perform protection checks on open, figure out where the file resources are in advance, and then from that point on you're really just accessing the blocks directly. and this fits in with a question from a little bit earlier, about what if you had different permissions on the same block from different people: they just don't do that in these typical file systems. all of the permissions are
attached to the file as a whole now uh in the last couple of lectures we're going to expand quite a bit where we start talking about file systems that might actually span the globe and that excuse me in that instance you can't necessarily trust that the permissions that have been checked on open are going to be kept when you're talking to uh data that's being stored somewhere in antarctica or wherever it happens to be and so we might have to adapt slightly different behavior once we get there okay excuse me so the other thing is kind of a side effect of unix which is the size of the file is determined as you use it you think about this a second you open a file and then typically you write bytes to it and then you close the file and so the file system really has no idea how big your file is going to be until you actually close it or if you open it you write some stuff you close it and then you go back later and you open it you append some stuff and close it now the file is also growing incrementally and so to any extent that the file system is going to optimize the placement of your bytes to try to make everything fast runs into this unfortunate problem here that it doesn't really know how big your file is okay the other thing we're going to need to do is organize everything into directories and so we have to figure out what data structure that is okay and then finally we're going to need to very carefully allocate in free blocks so that our access remains efficient and we can hopefully minimize seeks as i started out here maximizing sequential access those two things are going to be uh very important for us in our design okay so what are some important components of a file system so we have your file path which is the name you go somehow into a directory structure that's going to give us something we call an i number which is really a pointer into uh an inode array we'll get to that a little bit and what is an inode and i know it is basically a file header 
structure that points out which blocks belong in the file so think of this as like an index or like a big array that sort of translates from a position in that array to which data block is in the file and this file header structure is the thing that's going to get modified as i read and write the file so i write the file make it bigger i'm going to be adding entries here when i allocate a brand new file by opening it with create what i'm doing is i'm allocating a brand new inode just for that new file now the interesting question here also that's in the chat here is does error checking usually depend on the block device or the file system so there's a lot of layers of error checking we'll talk about those next lecture but just as a simple thing to point out the data sectors themselves have a whole bunch of reed-solomon bits on them so you actually write more bits than you think you're writing and that allows it to handle a lot of read errors just off the disk and then once we want to really deal with the fact that maybe a whole disk could die then we start doing stuff like raid and so on which we'll talk about more so mostly the error checking at one level is on the disk itself and then at another level we use redundancy by writing to multiple disk drives in order to deal with a drive failure um the i here is just for index so an inode is an index node and this is the index node number the i number but all right now if you remember by the way way back when we talked about the abstract representation of a process it's got some thread registers it's got some address space and so on the file descriptor table is in the process descriptor okay and that basically transforms numbers to open file descriptions if you remember and the way we talked about it you can go back and look at lecture six or so we said well this file description keeps track of what the file name is and what
position you're currently at in that file so that when you're reading and writing it can kind of pick up where you left off in reality what's actually being stored in the open file description is the current i number because if you remember we open the file first and that's where we trace the name all the way through the directory structure and then eventually we find the i number which points at the actual file and that i number now is what we use when we read and write so you can actually get into a situation where you open a file and then it gets moved and you can continue to read and write it even though it's moved somewhere other than what your name pointed at and that's because the open has held on to the i number not the name okay all right now so we take the file name and we look that up in a directory structure which gives us the file number um so open performs the name resolution we're going to have to figure out how to do that translating a path name into a file number read and write operate on the file number and use the file number as an index to locate the block and so the file number goes into the index structure to the storage block and that's on disk and so really you're going to figure out well suppose i was to go to offset 5000 in some file well that's going to be in the second block because the first block is bytes 0 to 4095 and then the second block is going to handle the next set of bytes and so i'm going to look that up in my index structure and find out where the second block is or block number one that's going to point at the disk somewhere and so i know that when i go to access byte 5000 i'll know which block it is so we're going to have to look at both how the directory works and how this inode structure works to help us find which block is of interest to us okay so there's several components which we're going to talk about in the next few slides one is what's a directory look like what is it exactly another is
what's that actual index structure a third is we're going to talk about storage blocks and the free space map a lot of these choices in here of these four pieces at least are things that vary depending on what file system you're using okay so let's first ask our question how do we get the file number well you look it up in the directory so a directory is really a file in most file systems containing file name to file number mappings okay and so basically a directory is just a file and you go in that directory and you find the file name you're looking for and that gives you the file number and as a result of that then you can now get the index structure and know where to look on disk the file number could point at a file or another directory so really the way you go through slash a slash b slash c slash d is you find slash and in slash you look up a which points to directory a and then in directory a you look up b and so on and so it's a chain of lookups through multiple different directory structures okay and so each file name to file number mapping is actually called a directory entry okay now processes are never allowed to read the raw bytes of a directory so if you try to open a directory it doesn't really work properly okay and so what i said earlier is that by and large unix doesn't care about the format of the data in files the one point at which that's not true is the directory format because the directory format is something that's directly interpreted by the operating system okay um so instead there's actually something called a readdir system call you can look it up do a man on it which iterates over this map without revealing the bytes okay um so why shouldn't we let processes read and write the raw bytes of the directory well because they might screw it
up okay and so pretty much read directory write directory create all of that stuff are operations that cause changes to directories indirectly okay so just keep that in mind but by and large except for the format inside a directory a directory is just a file and so keep that in mind because we're going to be building files using our file system and we're just going to use those files to store data or to store directory mappings and so the basic bag of bits and bytes that we end up using for our directory is something we're going to get out of our file mapping okay so here's directories just in case this is what you get on a mac os just the idea of these folders are something that kind of came up graphically 20 years ago or whatever but basically what we're seeing here is this top level directory has a directory in it called static and that static directory has in it a bunch of other things which have for instance homework and inside of that might have homework0.pdf this is a set of directories that we search until we eventually get to the actual file okay so the directory abstraction just to say a little bit more so directories that's what these blue things are here are specialized files contents with lists of pairs of file name and file number so in the slash usr directory what you see here is a pointer to lib 4.3 is actually a pointer to this directory a pointer to lib is this one inside the lib 4.3 directory there's a pointer to foo which is this actual file okay so these pointers are really just links they're i numbers which point at the inode structure that describes this file which happens to be a directory in this instance okay so the system calls open create and read directory traverse the structure so notice that open and create and things like that actually add things to directories you can do readdir to read your way through all the entries there's make directory and remove directory you guys know
about that that would be the way that for instance the original lib 4.3 got put into slash usr and then there's link and unlink which allow you to mess with these actual links okay and there's a bunch of libc support for iterating through the directories so you should take a look there's opendir and then once you get back a dir star then you can use readdir to get the next entry from it and you can process it in various ways so there's a whole series of system calls that have been made just to traverse this directory tree which is something that you end up doing almost for sure if you ever write an application that's got to talk to files okay so what's the directory structure well let's take a look here i'm just going to hammer this home i said this earlier so how many disk accesses does it take to resolve say slash my slash book slash count well you first have to read in the file header for the root directory so that's the slash directory and that turns out to be at a fixed spot on the disk somewhere so one of the things that a file system gives you is the place of the root inode for the root directory okay and then you read in the first data block for the root so remember the root is just a file so i read in the first block of the file and i start traversing the directory and eventually hopefully i find my in it it's a table of name index pairs and i search it linearly to find the name my and you can search it linearly in most standard unices okay and so that linear search becomes a really big problem if you have a directory with lots of entries in it which sometimes automatically generated directories are that way now the question here is if the root is at a fixed place does that mean it has a maximum size so the answer is no the thing that's at a fixed place is the inode index structure not the file blocks and there is a maximum file size in a typical file system but that's much larger than you'd
ever fill up with a directory okay so the fixed thing is the inode not the data you'll see that a little bit more as we get there so then you read in the header for my so that's another reference and then you look through my to find book and then you read in the header for book and header by the way is the same as inode and then you read in the data block for book you search for count read the file header for count and at that point i now have the inode for the actual file and that's basically cached for all my reads and writes from that point on okay a file descriptor points at the open file description which holds on to the header for count all right now the question here is a good one which is why not just store the full path in a big hash table so the answer is there are some file systems that do that so what you're basically saying is you take slash my slash book slash count and you map that to the inode and you could do that except that then that makes management a little harder because typically you make a directory and then you add things to it and so the directory structure itself is typically organized the way we're talking about but it's not impossible to organize it as a hash table okay but let's organize it this way for now all right because this is closer to what most file systems do it's simpler than a huge system-wide hash table because you're not having to worry where to store the hash table i don't know if that answers that question or not okay but it's not out of the question and there are some file systems that have chosen to do it that way so the other thing that we mentioned kind of way back when was current working directory which is basically a per address-space pointer to a directory that's used for resolving file names and this
is an example in which the current working directory could be slash my slash book and in that case you could actually cache the inode structure for slash my slash book in the kernel and thereby when you go to get to count it's much faster if your current working directory is slash my slash book okay and in keeping with the notion that everything's a cache what operating systems actually do cache under some circumstances is names and so for slash my slash book the book pointer is actually kept in an internal name cache which gets pretty close to the question that was just asked about keeping a path in a big hash table so if you think about the hash table as a cache rather than as the ground truth on the directory then that kind of works the way i think you were thinking there now so for our in-memory data structures here's the per process file table which takes a file descriptor number and looks up the open file description and that file description which is typically system wide you load the inode into it and it points at data blocks okay and so once we pull the inode into memory then we can read the various blocks in the file pretty quickly and we don't care where the actual file name is okay so the open system call basically finds the inode on the disk from the path name by traversing all the directories creates an in-memory inode and from that point on then access to the file is fast and it's independent of how long the path name is one entry in this table no matter how many instances of the file are open so if this file is opened by many people there's only one description here with many different file descriptors pointing at it okay now if you rename or move a file does it create a new inode or modify the existing inode neither what it does when you move the file is it just changes the directory structure it's the same inode it's unchanged so the inode is the
file in some sense and you can move it around but all you're doing there is you're changing who's pointing at that inode okay i hope that answered that question and in fact if the same file is in several different directories then you can have several different directories point at the inode and that just all works out okay and so this is part of why the inode is the thing that we want to store our permission bits on as well okay now of course the first file system we're going to talk about is the fat file system and it violates a whole bunch of these things but it's probably the most common file system in use today so we're going to start with that one so read and write system calls look up the in-memory inode using the file handle and so once we've opened then everything is fast okay so the last thing i want to do before we look at some case studies is see if we can understand what the characteristics of our files are in order to help us design our file system and so there have been many things studied over the years here's one that was published at fast which is a file systems conference in 2007.
and one of the observations was really that most files are small so what they did was they tracked the size of files in the file system over five years worth of data starting in 2000 and what you see here is that most of the files are in this small range here even though there are some long tails okay and so most files are small says that i need to optimize for small that's like 2k or less files but most of the bytes are in the large files okay so if you look at how many bytes are total in the file versus how much of the space it uses up what you find is that most of the space on the file system is used by the large files even though there's a lot of small files so there's a lot of small files but most of the bytes are in the large files and the trends of course are that files keep getting bigger and so on but what these two pieces of data show you is that one we need to be extremely efficient with small files and two we need to support large files still because those are very important so we can't really focus on just small files or large files we want to have something that does both well okay and we're going to keep that in mind because that's going to tell us a little bit about why various operating systems design their file systems the way they did so the first one i want to show you is the most common file system in the world i would say this is the one that you have on your cameras and on any usb stick you plug in it's a fat file system this was the original ms-dos file system and it has found its way through many iterations and sizes to ridiculously large flash drives okay and so this is a good one to know because it kind of lets us see the simplest form of how we can build a file system and so the simple idea fat stands for file allocation table and a file allocation table is just a big table of integers okay and you can think of it as sitting next to the
disk blocks okay and that big table of integers is in one-to-one correspondence with all the data blocks so there is an entry zero in the fat that corresponds to disk block zero entry one corresponds to disk block one and so on so you could almost think of the fat file system as being one integer worth of metadata per block okay and this fat index is basically going to be stored on the disk in a few disk blocks and it's actually replicated for reliability reasons and let's see how we can build a file system out of it okay so assume for now that we have a way to translate a path that means a full name into a file number okay so let's assume we have a directory and i'll show you how that works in a moment well then disk storage is just a bunch of disk blocks so what's a file well a file is a bunch of disk blocks how do you figure out which disk blocks they are well we're going to somehow link them together in a linear order so that we've got a file out of them and each block holds file data okay so block b of the file offset x gives you if we have say four kilobyte blocks which of the 4k bytes we're interested in okay and there'll be n blocks and so if we put a bunch of blocks together block 0 will be some disk block then there'll be block 1 block 2 and then we can figure out which block we need and then inside it which index we need to get the byte we want okay so for instance suppose that we're talking about a file and i'm going to call it file 31 block 0 file 31 block 1 file 31 block 2.
so what i've just assumed here is that somehow our files are numbered and each file has a set of blocks 0 1 2 and notice that they're spread potentially all over the disk okay so this is a potentially big problem with the fat file system and so what are b and x here b is the block number and x is the offset okay and so suppose i were interested in getting byte five of the file i would know that that's block zero because the blocks are say 4k in size so it'd be block 0 byte 5 and so that would mean i'd go to block 0 and i'd find the fifth byte in it and that would give me the byte that i wanted okay does that help so there's a file number a block number and an offset here okay now let's suppose we want to read from file 31 block 2 some offset x what do we do well we have to index and find block 2 which is down here so how do we know what block 2 is of file 31 well fat does this extremely simply okay all it does is we start with entry 31 the file number and so that means that the file number corresponds directly to a disk block block 31 represents block zero of the file okay so the 31st disk block is block 0 of file 31.
okay and then what does the fat do the fat is a set of pointers that say from block 31 the next block is what this link points to so this entry 31 is going to have a 32 in it because block 32 on the disk is the next block of the file and then down here i don't know what block number this is it doesn't really matter is block 2 of the file and so basically i can walk through the blocks of the file by starting at the head block which is the file number and then just following the pointers and that gives me block zero block one block two okay and so the way i read a block from the disk is if i want block two i do two hops and then i pull the whole block in and at that point now i can read byte x out of that block and hand it back to the user okay questions now if you read in the literature what you'll find is there's many versions of the fat file system there was one that was 12 bits one that was 16 bits one that's 32 bits okay that talks about the size of the integer in each one of these slots which has to do with the number of disk blocks on the disk so you can imagine that fat32 can handle many more and much larger disks than the original fat12 now a very interesting question here that's in the chat is what if you want file 32 the answer is there is no file 32 because file 32 would put you in the middle of file 31.
okay so not every file number corresponds to the beginning of a file okay so let me say that again file 32 isn't a file okay file 31 is a file block 32 it turns out is the second block of that file how do i know that well i have to keep track of where my files start that's where the directory is going to come into play if i thought 32 was a file and i popped in there what i'm going to find is that file is going to look funny because it's going to be missing the first block and if this is say a video with a certain encoding in it i'm going to not be able to properly decode it because i'm jumping into the middle of that file okay so you can start to see the ways in which a fat file system can get really screwed up like if i lose track of where all the file numbers are then it's going to be very hard to figure out where the starts of all the files are now there are recovery programs that will go through and try to figure out that oh look here's this block this block and this block and they look like they're blocks 0 1 and 2 of a video file therefore i'm going to call this a file and i'm going to generate a new fat for you that will let you read it as a file but it's a very error prone process okay now let's look at this so the file is a collection of disk blocks the fat is a linked list one to one with the blocks okay the file number is the index of the root of the block list for the file um the question that's an interesting one is do the links always go down no in fact that's going to depend a lot on whether you read some files and you write some files and you delete some files and you iterate over days months years it's going to matter what blocks are free and so you could link all over the place so there is no locality in the fat file system especially after you've used it for a while so that in fact the disk head is going to be going all over the place as you try
to read linearly through a file so you can already see this has got a problem here okay why is this used in cameras and usb keys well it was the lowest common denominator they wanted something that could work in the original ms-dos slash windows boxes and so on and so pretty much it's just historical reasons fat is the thing used okay and that may be an unsatisfying answer but that's the reason so the offset in the file is a block number and an offset within the block you follow the list to get the block number unused blocks are marked free so what does that mean that means the fat has a special entry that isn't a link to another fat entry that just says i'm free okay and so when you need a new block you can scan through the fat to find ones that are marked as free and those are ones that you can use okay and so let me give you an example here so suppose that i want to do a write okay actually before i do that i want to show you something else here so let's take a look at two files okay so here's an example with two files file 31 and file whatever file number two is i have no idea what that number is it doesn't really matter but it's got two blocks in it file 31's got three blocks and notice that i've essentially written here another block into file 31 and so you can kind of see how these pointers can get all scrambled now the question might be where is this fat stored well it's stored on disk okay at the beginning and there's a special entry here that marks things as free and the question might be what's the quickest way to format well you could mark all the fat entries as free that's a so-called quick format and in that case it doesn't really delete the data what it does is it removes all the indexes and so if you do a quick format and you do a listing of the directory it's really a dir in windows or whatever you'll think that
things are gone but in fact all the data is still underlying because all you've done is erased all the indexes and somebody with a file recovery program might be able to still look at it okay so one of the good things about fat is that it's simple you can basically implement it in device firmware and so that's one of the reasons that it's also used in cameras and so on because it's really simple to implement it doesn't require a lot of work okay is the free list kept as a linked list technically the free list in the fat spec is really just the zero entries here if you wanted to have a linked list you could do that in memory as a way of avoiding having to scan all the way through and sometimes if you have enough memory what most devices will do is they'll just load the whole fat into memory so it's much quicker to go through okay but technically speaking if you took the usb key out and you looked at the fat things are indicated as free by being zero okay so let's look at directories for a moment here so a directory in fat is a file containing file name to file number mappings okay and so here's an example where we might have the name music and it has a pointer in it to the file number for that music directory notice that there's typically the dot which is pointing to this guy and then the dot dot which is pointing to the parent um we link these directory entries together why are they linked together well just because in the fat things are all linked right so this is the first very clear instance hopefully for you guys of a directory is just a file that's got special formatting okay now the interesting question here is what if the sector with the root directory fails then you potentially lose your data now the fat there's actually two copies of it so you have a couple of chances to not lose it but if you really lose the fat then you've just lost all of the indexes and potentially have no idea what files are linked together so
free space for new or deleted entries is kept so when you delete something in a directory you just link over it and there's free space in that directory in the fat the file attributes are kept in the directory which means unlike what i was saying earlier that we're not able to put permissions on the file itself but rather on the directory so that's not quite the way we wanted it so what distinguishes directory files from normal files you can get to them by starting at the root directory okay so all of this depends a lot on the actual format of the metadata not getting screwed up and so any of you may have run into this i once lost a whole bunch of pictures in a camera because a couple of blocks failed in the wrong way and it's very hard to get them back so the fat file system is very fragile as you can see but again it's used a lot in very large usb keys okay and it's a linked list of entries you have to linearly search through and so on so where do you find the root directory just to circle back on that it's at a well defined place on disk in the case of fat this is block two there are no blocks zero or one don't ask me why that's just what they did so pretty much the very first block on the disk is the primary fat and that's where you start your lookup okay good so discussion suppose you start with a file number how long does it take to find a block well it's linear right you have to linearly search your way through what's the block layout for a file well the layout for a file is accidental whatever happens to be free as you're writing what about sequential access well sequential access is slow because you have to work your way through pointer to pointer to pointer so i guess if going from pointer to pointer is not too bad then your sequential access is not too bad random access is pretty bad right so if i wanted to get to block three
from file 31 the only thing i can do is work my way through all of these links until i get to block three okay and so the fat file system is very bad for random access unless you have a driver for the file system that pulls the whole fat in and re-indexes it in a way that's fast and you can do that there's no reason not to other than it takes a lot of memory and is not simple which is one of the reasons that people like to use fat in cameras because it's such a simple thing what about fragmentation that's where the file is split across many parts of the disk well as you see that just plain happens this is why there are all these defrag routines that you can run on old windows boxes and so on to rearrange the blocks so that you really are linked sequentially and you can get some sequential performance out of this but if you don't do that then the blocks are potentially all over the place small files yeah it handles them well enough right big files well there's a lot of links the biggest problem with a big file is you can't get randomly to the end of it without following a bunch of links so that's a bit of an issue okay so let's look at a different case study i want to talk about the berkeley fast file system okay so in unix including the berkeley fast file system the file number is no longer just a pointer into something like the fat it's actually an index into a set of inode arrays and in those inode arrays each file or directory is an inode okay and so the file number is an index into this array each inode corresponds to a single file and contains all its metadata so the things like the read or write permissions are stored with the file not in the directory like they were in the fat system it allows multiple names or directory entries for a file so again the idea there is the inode is the file the directory entries can point at it you can name that file 12 different ways as long as
you get to that through the directory structure you can now use the same file because its identity is defined by the inode okay so this is a much cleaner approach to dealing with files okay so the inode in unix typically maintains a multi-level tree structure i'll show you this in a second to find storage blocks for files and it's been designed in this asymmetric way which you'll see in a moment to make it great for little and large files okay if you remember i showed you that there's a huge number of little files but some really big files and we need to handle both of those well okay so the original inode format which i'm going to show you appeared in berkeley software distribution unix 4.1 and you know i've said this a couple of times i said this with sockets bsd berkeley software distribution was famous for all sorts of innovations in operating systems so this is a go bears kind of scenario this is part of your heritage here and just as a more recent thing this is a very similar structure to what linux ext2 or ext3 ended up with ext3 is pretty much what you would get if you formatted a new version of linux and you weren't trying to make a huge file system for which there's ext4 okay so go bears so here's the inode structure typically it looks like this where an inode has a bunch of metadata and then it has a bunch of what are called direct pointers which are pointers directly to block numbers okay and so the block numbers remember i talked about the logical block numbers earlier live in a big space from one to n and the direct pointers point directly at a set of blocks and then there are indirect pointers this is showing you an indirect pointer here for instance it points at a block and inside that block is a bunch of pointers to blocks and then doubly indirect pointers point at a block which points at blocks which point at a bunch of data blocks and then finally a triply indirect pointer goes to a block which
points to a block, which points to a block, which points to a bunch of data blocks. So all of the data blocks are over on the far right, and notice how this index structure is asymmetric. For the first N direct pointers, you have the inode and can directly figure out which data blocks are there; then, if you go past, say, block 10, you start having to pull in an extra block which then lets you get N blocks out of it. Can anybody figure out why we did a bunch of direct pointers and then some indirect, doubly indirect, and triply indirect pointers? Why this crazy structure? Any ideas? Something about small versus large files, yeah. What does this do? Exactly, good. Somebody else said the head of the file is fast, but you can still accommodate large files. That's correct. In fact, for files that are small enough, it's only one hop: once we've got the inode in memory, which we get on open, we can look directly in the inode to find the first N blocks. So this is extremely efficient for small files, but we can still accommodate large files, and for really large files the triply indirect pointers give us a huge number of data blocks. This structure was set up precisely to handle small files really well and still handle big files fairly well. And if you imagine caching all of these intermediate blocks, which we haven't talked about yet, then once you've gone to the trouble of pulling in the first triply indirect block, then the doubly indirect and indirect blocks, these can be put in the cache and you can get the rest very fast. Now, a question to clarify: does the file number point to a single inode or to an array of multiple inodes? The answer is that the file number points to a single inode. The file is defined by its inode, each file has only one inode, and when you talk about an i-number, it is an index into this inode
array, which tells you where the inode is. Does that make sense? Are we good, or do we need more clarification? The i-number points into the inode array; every file has one inode. Is the number of direct pointers part of the spec? Yes, file systems typically have a specific inode format, so that's part of the file system, and you don't often have the option to vary it; in fact, I'm not sure of a commonly used file system offhand that lets you change the number of direct pointers. Are inodes one-to-one with files? Yes. Each file has an inode, and each inode that is in use belongs to a file. Typically there's a whole bunch of inodes that are free, because if there weren't any free ones you couldn't create any new files. But for the ones that are in use, each is being used by exactly one file, each file has one inode, and the file number is unique to the file. I don't know if I can say this any other way: what we're looking at here is exactly one file, and it has exactly one i-number, which represents this spot. Should I pause on this? Are we good? Okay, I'm assuming we're good. The inode array does not include a pointer to an inode; the inode array has inodes in it, and the i-number is an index into the inode array. You could think of this as an array of structs if you want, but it's on disk. So the inodes are actually in the inode array, which is stored on disk. Now, the top of the inode is the file attributes: things like what user created it, what group it's in, the typical read/write/execute permissions for user, group, and world, and things like the setuid and setgid bits, which say that whenever you execute this file, if it's an executable, does it get an effective user ID that is the same as the owner, or an effective group ID that's the same as the group. Those bits are all stored in the metadata, along with whether the file can be
read or written, etc. The other thing: here's an example with 12 pointers. This wasn't necessarily the original BSD, but Linux certainly has 12 of these direct pointers; the original BSD had 10. That's part of the spec. What this is saying is that in this inode we have, for instance, 12 pointers that point at data blocks, and if they're 4K blocks, that means the direct pointers are sufficient for files up to 48 kilobytes. Everybody with me? Why? Because we have 12 pointers, and 12 times 4 kilobytes is 48 kilobytes. So we can do pretty well with lots of small files, having only one lookup hop, one indirection, to get to the data blocks: once we've loaded the inode into RAM, we can get those data blocks directly. And again, that gets us the thing we talked about earlier, which is that most files are small, so most inodes don't have any indirect, doubly indirect, or triply indirect pointers; those are zeros, and basically everything is in this small number of direct pointers. Does the file system not support 512-byte blocks? That's an interesting question, and the answer is that originally these blocks were small: they were 512 bytes in the original BSD, because the sector sizes were 512 bytes. When we got to the Fast File System, which I want to finish up before we end today, the blocks were bigger, and there was a special way to deal with fragmentation where data blocks could be partially used, but let's leave that for another conversation. By the way, just to finish one thought: if the sectors are 512 bytes on disk and we read 4-kilobyte data blocks, the file system has no idea that the disk is operating at 512 bytes, because the device driver only pulls 4K bytes in and out, so that level of granularity is never exposed to the file system. Okay, now, once we get to the indirect
pointers, we can actually get up to terabytes of data. So once we get to these levels, we're in pretty good shape, and that basically handles the really large files in our original study, so we're good to go with one-, two-, and three-level indirect pointers. Putting it all together, we have an on-disk index: these inode arrays, with a bunch of inodes in them, index files for us, and in the case of the original Unix there were 10 direct pointers. So we can ask our question: how many accesses for block 23? What you do is get through the direct ones, and then you need two accesses, because you have to do one for the indirect block. If we have ten direct pointers in this example, to get to block 23 we have to get past the first 10, and then we know that block 23 is going to be in the set of data blocks that are singly indirect, so we're going to have to read the indirect block, and then we'll be able to go down to entry 13 in that grouping to get block 23.
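The walk-through above can be sketched as a few lines of code. This is a hypothetical illustration, not any real implementation: it assumes the 10 direct pointers of the original Unix inode, indirect blocks that each hold `PTRS_PER_BLOCK` pointers (the value 256 is just an assumption for the sketch), and an inode already cached in memory on open.

```python
N_DIRECT = 10          # direct pointers in the inode (original Unix)
PTRS_PER_BLOCK = 256   # pointers per indirect block (assumed for this sketch)

def lookup(block):
    """Return (level, disk_reads) needed to fetch logical `block`,
    assuming the inode itself is already in memory."""
    if block < N_DIRECT:
        return ("direct", 1)              # one read: the data block itself
    block -= N_DIRECT
    if block < PTRS_PER_BLOCK:
        return ("single-indirect", 2)     # indirect block, then data block
    block -= PTRS_PER_BLOCK
    if block < PTRS_PER_BLOCK ** 2:
        return ("double-indirect", 3)     # two levels of pointers, then data
    return ("triple-indirect", 4)

print(lookup(5))    # a direct block: one read
print(lookup(23))   # past the 10 direct pointers: indirect block, entry 13
print(lookup(340))  # deep enough to need the doubly indirect pointers
```

This matches the lecture's examples: block 5 takes one read, block 23 takes two, and block 340 has to descend through the doubly indirect blocks.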
Actually, if it's zero-indexed, we'll go to entry 12 to get the one we want. But notice that we can easily figure it out: if we know what block we're interested in, we can figure out where in this structure we have to go to get it. The inode being well defined by the file system means we can easily go from a block number to where the data block is. All right, how about block 5? That's just a direct block, so we do one read. How about block 340? It turns out we have to go down to the doubly indirect blocks at that point: read this block, then this one, and so on; you can figure that out. Now, if you'll give me another few moments, I want to talk about the Fast File System. So far we've really been talking about Berkeley BSD 4.1, but as you can imagine, there's nothing in this data structure that says the data blocks are laid out in any intelligent way on disk. In fact, the original Berkeley Unix 4.1 BSD file system had this unfortunate property that it would start out really fast. Why is that? Because as I allocated new files, I'd lay out all my blocks on disk in sequential order, and reading them back would be fast. But over time, as you read and wrote and deleted, the file system would get progressively slower, until it was at half or worse of the original performance. The reason is that the blocks would become randomly scattered on the disk, because the free list in the original BSD was literally a linked list and had no idea of locality on disk. So you can imagine that's a problem. What did they do? Basically, they had to deal with performance. Going back to this from two lectures ago: if we want to optimize reading on a disk, remember that the seek time plus
the rotational latency plus the transfer time all add up to give the total time, and the seek and rotational latency can be long, especially the seek time, so we'd like to avoid seeks as much as possible. If you're reading sequentially, say through a video, what I'd like is that as I read successive blocks, they all sit on the same track, because then I can get them really rapidly. That can only happen if the file system is conscious of this and tries to lay blocks out so that sequential access either stays on the same track or, if it has to change tracks or cylinders, goes to an adjoining cylinder with a tiny head movement rather than going all over the place. So we're going to optimize so that we first read from the same track, then from the same cylinder, neither of which requires us to move the head, and only then from tracks that are adjacent. The Fast File System, BSD 4.2, 1984, had the same inode structure: from the standpoint of what kinds of files are supported, they kept the same idea we just showed you, really efficient small files plus the ability to support large ones. One thing they did do is go from a block size of 512 to 1024; they doubled the block size, and that immediately gave them a lot more sequential movement, because we could read basically twice as many bytes at a time. So that was good. The paper on the Fast File System is up on the resources page for you to take a look at. Again, this was a Berkeley project, well known at the time, and it did a bunch of optimizations for performance and reliability. Among other things, rather than having a single inode array on the outer tracks or cylinders of the disk, it
actually distributed them throughout. It used bitmap allocation in place of a free list. The nice thing about a bitmap is that you have one bit for every block, and now you can make a decision: you can say, look, there's a big range of free blocks on the disk that I could allocate. So compared to the free list, the bitmap gives a much better idea of what is sequentially free and what is not. Another trick they used was keeping ten percent of the disk space free, which probabilistically gave them a lot of runs of empty space, and that gave a much better ability to read sequentially off the disk. So in the early days we were talking about here, early Unix and DOS/Windows with the FAT file system, all the headers basically went on the outermost cylinders. Two problems with that: one, since the inodes are all in one place, if a head crash destroyed that part of the disk, you just destroyed all your inodes and lost track of where all your files were. Problem number two: when you create a file, you don't really know how big it will become, so the question is how do you allocate enough sequential space to get good performance? We'll talk about that next time, since we've just run out of time, but to give you a little to think about on the way out: they divided the disk itself into a bunch of block groups, distributed the inodes around the groups, and came up with a way of allocating files sequentially within a group, and given the heuristics for doing that, they improved performance quite a bit. We'll talk about that next time. For now, in conclusion: we've been talking about file systems, about transforming blocks into files and directories, optimizing for access and usage patterns, maximizing sequential access, and allowing very efficient random access. We
talked about files and directories being defined by a header called an inode. We talked about naming, which is translating from user-visible names to actual system resources: directories are used for naming, as a linked or tree structure stored in files, and that's how we define which blocks belong to a file. We talked about the FAT scheme, which is very widely used; it's a linked-list approach, used in cameras, USB drives, SD cards, and so on. It's very simple to implement in firmware but has very poor performance and basically no security. As you can see, we want to look at actual file access patterns: lots of small files, but a few really big ones taking up all the space. So next time we'll talk about laying out file systems to take advantage of that, including the Fast File System, and then we'll talk about a couple of others; I have two other file systems we'll cover very briefly at the beginning of the next lecture, including NTFS, which is the Windows file system, and F2FS, which is one that's optimized for flash. All right, I think that's where we're going to call it a night. I hope everybody has a great weekend; we're going to try not to get too crazy watching the vote counts coming in, but otherwise I hope you all have a wonderful weekend and we'll see you on Monday. Is the BSD file system still in use? That's a question on the chat, and the answer is yes: it's definitely still in use in BSD Unix, and Linux ext2/3 is also essentially the BSD file system. All right, you guys have a great evening. Bye now.
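The capacity claims in this lecture, 48 KB from the direct pointers alone and terabytes once the triply indirect pointers kick in, can be checked with a little arithmetic. This sketch assumes the Linux ext2/3-style parameters mentioned above: 12 direct pointers, 4 KB blocks, and 4-byte block pointers (the pointer size is an assumption, not stated in the lecture).

```python
BLOCK = 4096                   # 4 KB blocks (as in the lecture's example)
PTR = 4                        # 4-byte block pointers (assumed)
N_DIRECT = 12                  # Linux-style direct pointer count
PER_INDIRECT = BLOCK // PTR    # 1024 pointers fit in one indirect block

direct_bytes = N_DIRECT * BLOCK          # reachable with no indirection
single = PER_INDIRECT * BLOCK            # via the singly indirect pointer
double = PER_INDIRECT ** 2 * BLOCK       # via the doubly indirect pointer
triple = PER_INDIRECT ** 3 * BLOCK       # via the triply indirect pointer

print(direct_bytes)                      # 49152 bytes, i.e. the 48 KB figure
total = direct_bytes + single + double + triple
print(round(total / 2**40, 2))           # maximum file size in TB
```

With these assumed parameters the maximum file size comes out to roughly 4 TB, consistent with the "terabytes of data" remark about the three-level indirect pointers.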
CS162 Lecture 25: Distributed Storage, NFS and AFS, Key-Value Stores

Okay, welcome back, everybody, to the last official lecture before the end of the term. We're going to have another one on Wednesday, which will be a special-topics lecture, but I'd like to continue where we left off. We were talking about distributed storage, and before we got into that topic we were talking about the remote procedure call idea. The idea behind a remote procedure call is really that a client can link with a library that includes a bunch of stubs that allow it to make function calls which go all the way across the network to a server machine, with the return coming back, so the client can deal with them just as it would a local function. That's a remote procedure call: we're making a procedure call remotely. Some of the key ideas we talked about were that the arguments to these procedures have to be packaged up by the client stub in a network-independent way and serialized as a series of bytes, then sent across the network, where they're unpacked, and the server stub then calls a server function with the deserialized versions of those arguments. The return value gets serialized again, sent across the network, received, and returned into the client as the return from a function call. So the client can use this regardless of the fact that it's remote. A couple of things we talked about were, among other things, how these stubs get generated: there's a special IDL language that you use to describe the procedure calls and a compiler that generates the stubs for the client and server side. The server can of course be remote or local, and the client doesn't have to know the difference, other than a difference in performance.
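The marshalling step described above can be sketched in a few lines. This is a toy illustration of what a generated client stub might do, not the code of any real RPC system: the argument layout (`file_id`, `offset`, `length`) and the function names are made up for the example. The key point is the `!` in the format string, which forces network byte order so that client and server endianness don't matter.

```python
import struct

def marshal_read_args(file_id, offset, length):
    # Pack the arguments into a network-independent (big-endian) byte string,
    # as a client stub would before sending the request across the network.
    return struct.pack("!IQI", file_id, offset, length)

def unmarshal_read_args(payload):
    # The server stub reverses the packing before calling the real function.
    return struct.unpack("!IQI", payload)

wire = marshal_read_args(42, 5003, 4096)   # the bytes that cross the network
print(unmarshal_read_args(wire))           # the server sees (42, 5003, 4096)
```

Real systems generate this pair of functions automatically from the IDL description, which is why the client and server code never has to spell out the byte layout by hand.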
Now, today we're going to show you an example of a use of RPC, which is pretty common: making an actual remote file system work. Before I pass on from this, are there any questions? All right. The other thing we talked about is the CAP theorem, the consistency-availability-partition-tolerance theorem, which really was more like a conjecture by Eric Brewer back in the early 2000s, but it has since been proved in various ways. The basic idea is that you can have two out of these three, but you can't have all three: consistency, availability, partition tolerance, pick any two. The thing to keep in mind is that consistency means that when you change the file system on one side, everybody sees those changes consistently; availability means you always have the ability to access the file system; and partition tolerance says the system can survive the network being cut in half. Ah, we have a late question about RPC, which is fine. The question is: if the client sends pointers as arguments, does the client stub have to load all of that in? Pointers basically don't mean anything cross-machine, so part of that serialization has to be taking any data structures consisting of pointers and serializing them into a complete set of bytes to send across. There are sometimes specialized ways of packaging up opaque pointer references and sending them off to a server, but the server doesn't know what to do with them; they would only be for returning back to the client later. So the short answer to the question is yes: if you have any structures made out of pointers, they have to be serialized into bytes before they're sent across; otherwise they don't mean anything. Okay, so this CAP theorem, just to finish that thought, will have an impact on pretty much any remote storage we might have to
deal with, and it certainly comes into play when we start talking about cache consistency of the file system. All right, are there any questions on the CAP theorem? So let's talk about distributed file systems. As you can see, the idea behind this figure is really that the storage is going to be in the network somewhere, or, as we call it today, the cloud, and you can use that storage no matter where you are. You could be here at Berkeley on the left coast, you could be in Boston on the right coast, or in Beijing, whatever, and you can still use the data; in some file systems you can even use the data while you're driving from one coast to the other, all because it's in the middle. But once you start having things remote in the middle, you start running into the CAP theorem. So what is a distributed file system? It's pretty simple; you've all used this many times. We have a laptop here and a server that actually has the data, and instead of the local file systems we've been talking about for the last several weeks, in this case you're sending your request to read over the network and the response is coming back, and the server is a separate node somewhere else from the client that's using it. A question in the chat: is a solution to make the network resistant to partitions, so it doesn't have to be partition-tolerant? The problem with that is that we run up against the end-to-end argument, which really comes down to: how much work do you want to put in the middle of the network to make it so that it never partitions? In practice you can add a lot of redundancy to the network, you can have many alternate paths, but ultimately it gets very hard to guarantee there is never a partition. Still, you can add a lot of redundancy, so if you don't take the path straight from
Berkeley to Boston, going straight through, maybe you go by way of Alaska and down. If you have enough alternative paths, you can sometimes make the probability of partitions very low. So what we really want with a distributed file system is transparent access to files on the remote disk, so that the client doesn't have to know it's remote; from the standpoint of how you interact with it, the only way you notice is that things are slower. One of the concepts here is the notion of mounting a remote file system onto the local file system. Here's an instance where we have the local root, the little slash up here, then /users, then /users/jane. We've mounted a partition from the server, called kubi, at /users/jane, and then inside that jane file system there's another directory called prog, and we've mounted a different partition from kubi at prog. So what happens is the laptop user says /users/jane/prog/foo.c, and in reality, because of the way we've mounted this, it's really in the /prog partition on the kubi file server, as the file foo.c. By mounting, we can essentially get transparency over the fact that these are actually remote, so the local user doesn't have to know the difference. That's a form of transparency we get with the mount system call. Now, of course, that raises all sorts of questions which we don't have a lot of time left in the term to answer, but one naming choice, which you see pretty clearly in the figure on the right, is that every file in principle is a tuple of a hostname and a local name in the file system, and everywhere below in the operating system we always talk about files as that tuple of hostname and local name. This is the simplest thing to do, and it's what we often do,
for instance in the department. It's fine, except that it doesn't give you a lot of opportunity to move files around to load-balance or to deal with failures. It does let you do DNS remapping: if kubi, the file server, went down, I could change its IP address to point to a different server and then I'd still be up and working. Another alternative, which is much more interesting in the grand scheme of things, might be a global namespace, where every file name is somehow unique in the world. There have been several instances of that over time; today we'll talk a little bit about one, which can be based on hashes over the name. So let's talk about what's involved in making a remote file system work. We've talked a lot over several lectures about how to make local file systems work, but what about remote ones? Somehow the "device driver", and I'm going to put that in air quotes, talking to the disk has got the network involved. That's a little strange already, right? Because we think of device drivers as going from the operating system down into a controller and to the local disk, but instead we're going from the system-call interface into the network, then over to a different server, and then into the device driver there. So we need some abstractions to let us do that, and one of those abstractions is called VFS. This was originally the Virtual File System, and then in Linux it became the Virtual File System Switch; I'll show you why "switch" maybe makes more sense. If you take a look at what I've circled here in our kernel, the file systems actually go through a layer for handling files and directories, which is the VFS right there, and below the VFS are potentially many file system types, some of which are over the network. Some of these file system types might interact with the network subsystem, go out, come back through a different
network subsystem on the other side, and then back into the file system and down to the block devices. So this VFS is an enabling abstraction that allows us to mount file systems, first of all of many types, and second including things that are across the network. So what exactly are we talking about here? If you remember, in our layers of I/O, you do a read system call: it takes you into the libc version of read, which does a system call, takes us into system-call processing, and then, if you look down here (this is a slide from lecture 10 or so), inside we actually have something called vfs_read, which gets called from the higher layers and ultimately from the user, VFS being the virtual file system. That call interacts with the VFS layer, and you can think of it this way: you've got the client process at the top, it comes through the VFS layer, and depending on which part of the directory tree we happen to go to (remember the mounting), we could be going into an ext2 or ext3 file system, kind of like BSD, or into an MS-DOS FAT file system, and either of those can be used in the same way by the client. So in this idea of opening /floppy/test and then writing to /tmp/test, what I'm actually doing in this loop is reading from an MS-DOS file system and writing to a Unix file system, and this all works because of the abstraction of VFS. So that's pretty good, right? How does that work? The VFS layer is like a local file system without any of the disks involved; it's really just a set of hooks that allow you to plug in the functionality needed for the client to interact with a file system. It's compatible with all sorts of local and remote file systems, and it allows the same system-call interface above, regardless of the file system. Now we
won't go into this in great detail, but for instance you could look up VFS in Linux and it would tell you that this is an interface with four primary objects: a superblock object, an inode object, a directory entry object, and a file object, representing all of the things we talked about when we covered Unix file systems. What's interesting, though, is that depending on what file system you plug in, it may not even have an inode object. Think about the FAT file system: there's no inode there. So really, this VFS layer gives the underlying connector the ability to fake something that looks like a Unix file system. You can make it look like directories are made out of files even if they're not; you can make it look like there are inodes and superblocks and so on, regardless of whether those things really exist in the underlying file system. That layer, sometimes called a shim layer, basically allows you to plug in things that the VFS layer can then make look like file systems. So I'm going to talk to you about NFS, which is in some sense the first user of VFS, back when it was the Virtual File System, and this has persisted to this day; it's persisted for the last 25 years. So let's talk about a simple distributed file system in a little more detail. First of all, we talked about RPC, so we're going to be making procedure calls. When the client needs to do a read, the read goes into the VFS layer; the VFS layer could then just make a remote procedure call to the server, which talks to the disk, gives you the blocks back, and returns the data. So we could have a whole bunch of these round trips, and because this is RPC we don't even have to care about the endianness of the client versus the server, because the client can call a procedure on the server and it just works. So this is kind of the
first way that people built file systems: you're using remote procedure calls to translate things, but there's no local caching on the client, just on the server. The advantage of this is that the server provides a consistent view of the file system, like it would if you were running processes on the server, so that's good. The downside is that it's really not performant. It's expensive to go across the network, even when you're in the local network, where it's going to cost you a millisecond round trip, and much worse if you have to go to the metropolitan area or globally, where you're talking 10 to 100 milliseconds. That adds up really quickly for every block read. So this is fine as an abstraction, gee, we could throw this together really quickly, but it's really not going to work well. There are actually ways of mounting a remote server with SSH, for instance, that kind of act like this, where you just open a tunnel and essentially get a mounted file system; it's really not going to perform very well, but you can do it. So obviously the thing to do is caching, right? We've talked lots about caching; remember, everything in an operating system is a cache. You can quote Kubi on that; if there's nothing else you get out of this class, you can quote me on that. So what we're going to do is put caches in the system on the client side in addition to the server side. The server-side cache is kind of easy, because that's the buffer cache, but we would like to use the buffer cache on the client side too, and how does that work? The advantage is that if you can somehow do the open/read/write/close portion locally, because maybe you've cached credentials and information about some remote file, this gets really fast. The very first read to some file reaches out with an RPC across
the network, pulls the result off the disk, puts it into the server cache, returns, gets into the local cache, and returns the result. So that first read was slow, but boy, the subsequent ones are very fast: they just return the value that's in the cache. That sounds good, but what are some problems with this? One of them is failure. Consider this: we have a writer on some different client, they write some data into their cache, and poof, that machine crashes. Notice what just happened: we just lost data, because the data was cached on the client and never made it to the server, and it's now gone to /dev/null. So clearly, the moment we start putting caches into the system, we've got some data-reliability issues to worry about. Of course, we could force ourselves to do an RPC with an acknowledgement back first, and only then return to the client, so the client never gets an OK until it knows the data has been placed on the server. That seems like a simple fix, because now if we crash, we haven't actually lost the data. What are some other problems? Well, something else rears its ugly head, which you can probably see here: the first cache has got the first value in it, the second cache has got the second value, so if client 1 reads, it gets v1, and if client 2 reads, it gets v2, and we have a serious cache consistency problem. Now, the question in the chat is, frankly, the obvious one: how the heck do you deal with this? On this slide this seems like a problem, and it is a problem. We're going to talk about some solutions, but you can start imagining some of them, like: whenever you write, you first have to invalidate all the other caches, and only then do you get the right to write, so when the others go to read again, they get the new value back.
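That invalidate-before-write idea can be sketched as a toy model. This is a hedged illustration of the general technique, not the protocol of NFS, AFS, or any real system: the server here tracks which clients cache each block and clears their copies before accepting a write, so a later read on another client misses and refetches the fresh value.

```python
class Server:
    def __init__(self):
        self.blocks = {}    # block number -> current data
        self.cachers = {}   # block number -> set of clients caching it

    def read(self, block, client):
        # A client-side miss comes here; remember who now caches the block.
        client.cache[block] = self.blocks.get(block)
        self.cachers.setdefault(block, set()).add(client)
        return client.cache[block]

    def write(self, block, data, writer):
        # Invalidate every other cached copy before the write takes effect.
        for c in self.cachers.get(block, set()):
            if c is not writer:
                c.cache.pop(block, None)
        self.cachers[block] = {writer}
        self.blocks[block] = data
        writer.cache[block] = data

class Client:
    def __init__(self):
        self.cache = {}

    def read(self, server, block):
        # Cache hit is free; a miss costs a round trip to the server.
        return self.cache[block] if block in self.cache else server.read(block, self)

s, c1, c2 = Server(), Client(), Client()
s.write(0, "v1", c1)
print(c2.read(s, 0))   # c2 fetches and caches v1
s.write(0, "v2", c1)   # c2's cached copy is invalidated here
print(c2.read(s, 0))   # so c2 refetches and sees v2, not the stale value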
Right. Or you could say a little bit of inconsistency is okay, as long as I eventually poll and get consistent data back. Or you could push changes; there are many options here. As they say, the first step is to recognize you've got a problem. The other thing I'll point out, by the way, is that if for every write you're always broadcasting the results, you're potentially using a lot of network bandwidth, and that may or may not be the right thing to do. What's good about the questions you're all asking is that you've got the right point of view: this is clearly an issue, and we'll talk a little about what you can do. But before we get there, let's talk a little more about dealing with failures. We said that maybe if you acknowledge all the writes you can save your data, but what if, in general, the server crashes? In that instance it's not a client crashing, it's the server, and you might ask: can the client just wait until the server comes back and keep going? In many cases the client can't wait that long, because who knows how long the server will take to reboot, and maybe changes that are in the server's cache but not on disk get lost. When we talked about the buffer cache holding uncommitted results, there was that window of time where the server might crash and the data isn't there. So clearly we want to start with good journaling on the server side; that's something we already know how to do. We'll assume the server is doing its best with some sort of journaling, RAID, whatever you like; they're going to do it. But something a little more subtle might be: what if there's shared state? Think about this for a moment. A client does an open; by now you're experts in using the open system call,
and then it does a seek, saying "start me out at byte number 5003," and then it's going to do a read. The issue with that sequence is that on the local file system it just works: you seek to byte 5003 and your next read starts there. But in the case of a remote file system, if you do that seek across the network, and then the server crashes and comes back up, and now you do your read, probably the wrong thing is going to happen. This idea of shared state, where in this case we're sharing the current position in the file between the client and the server, leads to some really weird failure modes. A similar problem might be this: what if the client goes ahead and removes a file, but the server crashes before acknowledging? Maybe the client doesn't know whether the file was removed or not, and if that removal was part of a cleanup process, or part of a temporary build, all sorts of weird things might happen because the file is actually still around even though the client thought it was deleted. So one thing we can do is change our thinking a little and try to make sure all of our interactions with the server are stateless. A stateless protocol is basically one where all the information needed to service a request is included with the request. You're all very familiar with this idea from HTTP: when you go to a website, the state of your access is typically kept in cookies on the client side, and the important cookies get sent with every request, so the server doesn't have to hold on to any information. So maybe in the case of a file system, instead of setting the position by seeking, we pass, under the covers, the idea of "I'm at byte 5003, please give me some bytes back."
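Here's a small sketch of that idea (all names invented, not the actual NFS calls): the file position lives entirely on the client, every request carries its own offset, and, anticipating the next point, writes to a fixed block are idempotent, so the client can blindly retry when a reply is lost:

```python
import random

class StatelessServer:
    """Every request carries the offset it needs; no per-client state."""
    def __init__(self, data):
        self.data = data
        self.written = {}

    def read_at(self, offset, nbytes):
        return self.data[offset:offset + nbytes]

    def write_block(self, num, data, drop_reply=False):
        self.written[num] = data            # same effect no matter how often
        if drop_reply:
            raise TimeoutError("reply lost")

class StatelessClient:
    def __init__(self, server):
        self.server = server
        self.pos = 0                        # file position lives on the CLIENT

    def seek(self, offset):
        self.pos = offset                   # purely local: no RPC, no server state

    def read(self, nbytes):
        data = self.server.read_at(self.pos, nbytes)
        self.pos += len(data)
        return data

    def write_block(self, num, data):
        while True:                         # retry on timeout: safe, because
            try:                            # write_block is idempotent
                self.server.write_block(num, data,
                                        drop_reply=random.random() < 0.5)
                return
            except TimeoutError:
                pass

random.seed(1)
srv = StatelessServer(b"x" * 5003 + b"abcdef")
cli = StatelessClient(srv)
cli.seek(5003)                              # server never learns our position
print(cli.read(3))                          # → b'abc'
cli.write_block(7, b"v1")                   # may run twice; result identical
print(srv.written[7])                       # → b'v1'
```

Even if the server "crashed" and restarted between the seek and the read, the read would still work, because the server was never asked to remember where we were.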
And that would be a stateless protocol. Now, the question here might be: could an alternative be that we make a bunch of requests and then wait for a bunch of acks? Yes, we could do that, but once again we're starting to get into that weird General's Paradox kind of position, and if we can just do everything statelessly it's much simpler; we don't have to worry about what the server knows and what it doesn't. An even better adjunct to a stateless protocol is idempotent operations, which say that not only is the protocol stateless, but if I do the same operation multiple times it's okay, because the repeats have no additional effect. That would be the difference between a write to a file that appends, where I have to be sure I append only once, and a write to a given block on disk, which I can do as many times as I want: the data gets written, and if it's already been written and I write it again, nothing changes. That's an idempotent operation. So when a timeout expires without a reply, with an idempotent operation you can just retry. The idea of statelessness is very appealing for reasons like this, and again, HTTP is a good example of a stateless protocol. So the question might be: can we make use of that? And we will. I'll tell you about NFS, which is a stateless protocol by design. I want to do a couple of administrivia things before we get there. Our last midterm (and again, there's no final in this class, keep that in mind) is this Thursday, 5 to 7 pm. All material up to today is included, although we'll be focusing on the last third of the class; we're assuming you don't necessarily forget everything from the beginning. We're going to assume that cameras and Zoom screen sharing are in place, and there's no excuse not to have this turned on, so you can lose points for not having the camera and screen sharing turned
on when the TAs come to talk to you about it. We might remind you at the beginning, but it's really on you to make sure that's all working. We're going to once again distribute links like we did last time, because I think that worked pretty well. There's going to be a review session tomorrow from 7 to 9; I didn't look tonight, but I know there's already a Zoom link for it that's been put together, so it should be published on Piazza, just watch for that. Lecture 26, which is Wednesday, won't be on the exam, but it's going to be a fun lecture of topics of your choosing, should you send them to me. So feel free to send me a couple of queries, and I'll do what I can to get them into the lecture; barring that, I have some other things I'll talk about: a little bit about data capsules, which I'm working on, et cetera. Just let me know by sending me email; that seems simplest right now. Oh, the other thing: as with last term, HKN is virtual. I was going to post a video on how to make sure you get a chance to comment on the class; we'll post that on Piazza, but don't forget to do your HKN evaluations, that's always useful. I think that's all the administrivia I had, unless anybody has other questions. I'm sure you'll all do very well on the midterm; I'm offering you good wishes for that in advance, and I'm going to miss having our little time here every night, in Pacific time, or whatever time it is for you; some of you are in vastly different time zones, I know. Okay, so let's talk about the Network File System from Sun Microsystems. This particular file system came out in the 80s and was in pretty widespread use; it still is in wide use. There are three layers to it, and you're already aware of the first two. There's the UNIX file system layer, which is the system-call layer: open, read, write, close, file descriptors pointing at
file descriptions; you're all very aware of that. The VFS layer is the layer I just introduced you to, which distinguishes local from remote files purely by plugging in a table of functions that are called as a result of the system calls. And then there's the NFS service layer, the bottom layer: that's the part that handles the NFS protocol, does the RPC, and translates and serializes into a network-independent format. The NFS protocol uses XDR as the serialization protocol for that RPC; it was one of the first ones out there. In fact, NFS may have been one of the very first to have a network-independent RPC layer. Operations for reading and searching directories, manipulating links, accessing file attributes, et cetera, are all part of that protocol, and that goes across the network in the NFS service layer. The other thing it has, and certainly the first versions of NFS, 1.0 and 2.0, had this very visibly shown to the user, is write-through caching: modified data is committed to the server's disk before results are returned to the client. That got relaxed a little over the years, where the client might return while the write is still going through from the client's buffer-cache layer, but by and large it's a write-through approach, where transactions aren't done until they're committed on the server side. This can slow things down quite a bit under various circumstances, but you have the advantage of knowing that your data made it. Servers are stateless, so the protocol is a stateless protocol, as we were discussing: reads include all the information needed for the operation. So you say "read at this i-number and position," not "read this file name," so that all the information, for instance the current position I want to read at, is included in the protocol. And there's really no need to do an open or close on the file across the
network, because the local client has enough information, and every operation is satisfied on its own. All of the operations are idempotent, as I mentioned, so you can perform requests multiple times and get the same effect. For example: the server crashes between a disk I/O and the message send; the client just resends, and the server does it again, so that's fine. If you read or write file blocks, you just re-read or re-write, and there are no other side effects. The interesting one is remove. If you ask to remove a file from a directory, NFS may do the operation twice if there wasn't an acknowledgment for some reason; the second time, there's just an advisory error returned from the server saying that the file wasn't really there. These are the kinds of adaptations in the protocol to keep it stateless and idempotent. The failure model for NFS is an interesting one as well; it's also transparent to the client system, in general. The idea originally was that when a server fails, the client just freezes until the server comes back up, and then it just works. That was called a hard mount. The problem with that is that servers would go down, and then all these processes that were reading and writing files from an NFS partition would get stuck in the device driver, and if you did a ps aux to see what was going on, you'd see all these processes blocked with a little D: they were hard-blocked in the NFS driver, waiting for the server to come back up. What's worse, that was an unkillable state, so you couldn't even kill them off; they were just really jammed up. So that's transparent, but you might argue whether or not that's a good thing. There was actually a different type of NFS mount, which is what pretty much everybody uses today, called a soft mount, and if you do some man pages on the NFS clients, or do some
googling on that, you'll see the soft mounts. The idea in a soft mount is that when the server goes down, you just get an error back, and the read or write operation you were trying to do fails. Of course, that failure is kind of weird, because the client wasn't expecting a failure caused by a server crash; you're using the same interface you would with a local file system. But at least it's not locked up in a way that can't be killed off. So here's a picture of the architecture, as I mentioned. On the client side, we have the system-call interface, which takes you through VFS, and VFS has a whole bunch of different possible file systems that might be plugged in. How do you know which one to go to? Depending on what you mounted (we showed you mount earlier), the part of the file system you happen to be in tells you which of these branches, which of these actual file systems, you're going to use. If it happens to be a local one, you'll use the local file system; if it's a remote-mounted NFS file system, you'll come off of VFS into the NFS client software, which takes you down into the RPC/XDR layer, across the network, back up into the NFS server layer, which then uses VFS to access the local file system. And then the results get reversed back in the other direction. Questions? If you notice, on the remote side with NFS, you're actually just using a file system on the other side. A positive thing about this idea is that if the server is disconnected from clients, you can go through and evaluate the consistency of the file system with all the normal tools, because to the server it's just a local file system, and then once it's operating as an NFS server, which you get by starting up the NFS daemons, remote clients are able to access that file system on the server. So that's pretty cool, right? It works
pretty well. But let's talk a little bit about consistency of the caches. The NFS protocol is a weakly consistent protocol by its nature. The client polls the server periodically to check for changes: if the data hasn't been checked in the last 3 to 30 seconds (it's settable, to some extent), it polls and asks the server the state of that particular block. When a file is changed on one client, the server is notified, but that isn't reflected back to other clients that happen to be caching it; it's up to them to poll and pull the changes. So in the scenario we had earlier, where the second client writes and gets an acknowledgment back, we can actually be in a situation where these two clients, at least over the short term, are inconsistent with each other, but because of the way this polling works, eventually the first client will get the new data. That's why we call it a weakly consistent protocol: the client kind of converges to the right contents of the cache. For instance: "Is f1 still okay?" "No, here's a new value," and at that point the client is good to go with the latest data. There have been various changes over the years that have made it less likely that you notice this inconsistency. Clearly you don't want to poll so frequently that you use up a bunch of network bandwidth, and in fact the polling, even regular, simple polling that's not too frequent, is a hard limit on the number of clients that can be connected to a server, because every poll coming in from a client uses up bandwidth on the server. So only a limited number of NFS clients can be connected to a given server. And if multiple clients write, there are these windows where things are a little bit inconsistent. It is interesting: when I first started using NFS
many years ago, I did notice that I would edit on one machine and compile on another, and occasionally I'd save out some changes to a file and be so quick going to compile in a window on a different machine that I'd occasionally get these really weird phantom errors, because part of the .c file I had just saved was intermixed with old versions of it, thanks to the NFS consistency. Now, this question about Google handling hundreds of users simultaneously is not quite the same issue here, because there, there's actual pushing of data going on; if you change too many things and there are too many clients, then you're using bandwidth going the other direction. The problem with NFS is that even if nobody's changing anything, you're polling all the time, and that's using bandwidth while you're idle; at least in the Google case, you're using bandwidth only when there are actual changes going on. Now let's explore this weak consistency for a bit, because what sort of cache coherence might you expect from a system if you didn't know it was weakly consistent? Suppose we have three clients, and the file contents start with A in it, let's just say, and client 1 is reading. By the way, time goes left to right here. Client 1 starts reading, and they're going to get A, and client 2 starts reading; they'll get A for part of the time, but then if client 1 writes B at some point, client 2 might start seeing B, so there might be some intermixing of B and A, and then client 2 might write C, and you can get this situation where, transiently at least, you're seeing parts of each version. So what would you actually want? Well, one thing you might want is: what if I want to have the same behavior as I
would on a local file system? If you wanted that, so now we have three processes instead of three clients, then you might say: if a read finishes before a write starts, you always get the old copy; if a read starts after the write finishes, you always get the new copy; and otherwise you get either copy. It turns out this NFS polling protocol doesn't quite give you that semantic; it gives you this somewhat less clean intermixing. All right, now I'm seeing some good combinations in the chat, thinking of different options between polling and pushing. I'm going to give you the pushing option in a second, and we can take some questions after that. For NFS, rather than the somewhat cleaner view we might expect from a local file system, we really have this other idea: if a read starts more than 30 seconds (or pick your polling interval) after a write, you get the new copy; otherwise you could get a partial update. That's more bandwidth-efficient than it would be if we tried to make sure every update was propagated to every client all the time, but it does have that slightly weird semantic. So the pros and cons of NFS: it's relatively simple, so it's highly portable, and they were one of the first to have RPC with the XDR serialization protocol. Some cons, though: it's sometimes inconsistent in ways you can see, and it doesn't scale well to large numbers of clients, because even in the idle case everybody's polling. So let me tell you about another file system in this space. This one came later than NFS, but not too much later. I remember working with the Andrew File System, AFS, in the late 80s; it actually became the DFS system, and IBM bought the file system at one point; it was a commercial product. It had a callback mechanism instead of the polling. The idea is that this is no longer stateless, by the way, so we're giving up statelessness, but the server
keeps track of every machine that has a copy of a file, and whenever there's a change, the server tells everybody with an old copy to invalidate their copy. As a result, there's no polling bandwidth, just invalidation bandwidth. Now notice the decision that was made here: not to push the changes out to everybody who's using them, but rather to invalidate. And there's another interesting semantic, which AFS did, which is basically what I call write-through on close. Think about this for a second: the Andrew File System was really designed to work in a much more global environment than NFS. In fact, you could mount file systems that were served in other parts of the country, actually mount them and use them locally, and the performance was pretty good. The reason for that is this write-through-on-close consistency, which means that when I open a file and start modifying it, none of my changes are propagated to anybody until I actually close the file. Even though I'm doing writes, it's not until I do the close that my consistent version becomes available for viewing by everybody else sharing the file, and that's also the point at which the notification goes out that there's a new version of the file. Now, to make this work, there are two things to worry about. One is that if I have a file open and somebody else changes it, I don't want it pulled out from under me. What happens there is that when I open, I see the version of the file from the moment I opened it, no matter what anybody else is doing. I open the file, they can be changing it like crazy, but I will continue to see a snapshot of the file from the point I opened it, until I close it and reopen it again. The upside of that is a very consistent view: I always have a consistent snapshot of the file from the moment I opened it, and when I write, everybody always
sees a consistent view of the written product. I know there's somebody worrying about race conditions in the chat; we'll get to that in a second. But if you think about it relative to the NFS version, AFS gives you a much better set of semantics, because you never see an inconsistent set of bytes in a file; it's always a fully consistent set of bytes. That's an extremely positive thing. When we notify others that the file has changed, they're either going to keep working with their consistent version, or, if they have it closed right now, they'll be notified to throw their copy out and get the new copy, and they'll see a completely new, consistent version of what I've got. Now, a couple of things here. If you have lots of people writing, they may not actually see each other's writes, so out of band you need a locking scheme, or a notification scheme, to say, "Hey, I'm working on the file right now, why don't you wait for me." That's one thing you might worry about. The second interesting thing about the Andrew File System is that rather than caching in memory, which is what NFS does in the buffer cache, AFS actually caches on disk: the local disk becomes a cache for the file system. It can store whole files, so whenever I open a file, the whole file is allowed to be brought from the server and put on my local disk, and now I can access it as fast as if it were local, because it really is local. So the potential here is for much better caching, because I'm using the local disk to cache, and you can have many, many more clients talking to a given server, because the server isn't supporting every read and write; what it's doing is helping with consistency management. All right, there are many questions here; I think the way to handle them is to think through what we've got. When you open a file, you get a snapshot of the file at
the time you opened it, and you'll hold on to that snapshot until you close. If you want to do the equivalent of seeing whether anything has changed, you can close it and reopen it, and you'll find out. If things never change, or don't change for a long period of time because they're mostly read-only, then as you use files they migrate to your local file system, and now you get really fast action: you open, read, read, read, do a bunch of stuff, close, and all of that is done purely locally, because the file server is responsible for making sure your locally cached copies of the files go away if they're no longer consistent. Now, a good question: what if you only want to read a small part of a file, and it's a 20-gigabyte file? I think that's what's being asked, and it's a really good one. In the original version of AFS you actually had to cache the whole file in the local file system; later versions started caching in 64K chunks or so, and those modifications allowed you to have part of a file if the only thing you wanted was to read a little bit of it, which took care of the performance problem you're worried about. So, to restate: data is cached on the local disk as well as in memory, and on a write followed by a close, you send a copy to the server, which tells all the clients with copies to invalidate their local versions; they'll need to fetch a new version from the server at that point. Now, if the server crashes, unlike with NFS, we can't even conceive of a client-transparent version of the protocol, because the server is supposed to have all this callback state keeping track of who's got copies of things. So when the server crashes and comes back
up, it actually has to request information from all of the connected clients about what copies of what files they've got, so rebooting the server is more expensive. The pros relative to NFS: much less server load; the disk is a cache, so technically the cache is much larger; and the callbacks mean the server doesn't have to be involved if a file is read-only. So if you have mostly or totally read-only partitions, you can have a small server share with a huge set of clients, because really all it's doing is helping the clients get copies of the data into their local caches. Now, for both AFS and NFS, although AFS is less problematic here, the central server becomes a bottleneck: the performance of all the writes ultimately goes through the server, and there's a question about availability, because the server becomes a single point of failure, and the server has to be more powerful than the clients, so it's typically higher-cost than a simple workstation. Now, a good question is brought up here: couldn't the server store the callback state on disk? The answer is yes; in fact, as I recall, it has a cache of what it used to know the state was, but who knows what happened while it was crashed, so at minimum it has to validate the current state of the caches. All right, good. One thing that's fun about the Andrew File System, which I didn't write down, is that it had the notion of global names. If you looked at a client machine, you would see there was a /afs/ partition, and then you could mount pretty much anything from anywhere in the world under a name that was location-independent, and as a result, in principle, every file in AFS was globally available if you had the right permissions. This is a little different from NFS, where things are named by that
tuple I mentioned earlier, which is a particular machine and a local file name. Here, in principle at least, there were global file names, or at least it was starting to go in that direction. So we would be at MIT, and we would mount files that were down at CMU, and ones that were over at Berkeley, and so on; we could mount files on servers across the country, and it actually worked pretty well, because most of the performance was handled by the local disk. This is an example of something where you really were starting to mount things very distantly. Of course, you're all used to that now with the cloud, but this was quite the innovation when it first came out. All right, but let's move even further away and ask: what's this obsession we have with files? What about sharing data instead of files? One thing that's become very popular over the last decade, actually I'd say the last 15 years, is this notion of a key-value store, where the world is like a big hash table that lets us look up keys and get values back. Back in the early 2000s, when I started working on peer-to-peer storage systems, key-value stores were kind of in their early days, so this idea has been around for more than 20 years; it's just become very prevalent over the last decade. And it's native in pretty much any programming language: you've got associative arrays in Perl, dictionaries in Python, maps in Go; pick your language, there's a hash table. The key-value store we're going to be talking about is kind of like a hash table that spans the globe, or spans the network, so for everything you can imagine using a hash table for in the languages you're aware of, you can use a key-value store more globally. And in terms of sharing information, what about a collaborative key-value store, rather than message passing or file
sharing? Rather than mounting the file system onto clients and sharing through files, maybe we have a key-value store, and we just happen to know which keys we're using, and we share that way. That seems like another option, and maybe it gives us more choices in how to make things consistent and how to make them durable. So we might ask ourselves: can we make it scalable? Can we handle billions or trillions of keys? Can we make it reliable, even though things are failing and the network's partitioning and so on? Can we always get at our data? Now, I'll tell you up front, we're not going to violate the CAP theorem, but perhaps we can get to where the CAP theorem doesn't bother us quite as much: we get an old value for the key that's pretty close to recent, maybe not the most recent one, and maybe that's okay. The basic idea behind a key-value store is a very simple interface: there's put and get. Put takes a key and a value, and inserts that value at that key into the key-value store, whatever that means; it goes off into cyberspace somehow. Get takes the key and returns the value from cyberspace somehow. So the interface is almost boringly simple; the question is whether we can do something interesting with this that is scalable, fault-tolerant, reliable, durable, pick all your favorite adjectives. Can we make that happen out of this simple interface? And the answer is yes, and it becomes much simpler precisely because the interface is so simple. So why a key-value store? I've already said this, but it's easy to scale: huge volumes of data, petabytes, exabytes, pick your number, big. Uniform items mean you can distribute them easily, and roughly evenly, across many machines, so whether I have 10 machines, or 100 machines, or 1,000 machines, I can just scale up the number of key-value pairs I can handle
and how many clients I can handle, just by adding more machines to the system. That's kind of appealing. If you think about a big NFS file server, or a big AFS file server, or whatever your favorite thing is, the way you typically scale something like that up is you go buy a huge piece of hardware, and that really big thing is fast because it's got a lot of really fast processors in a single box, and it's really expensive. The way you scale up a key-value store, on the other hand, is you just add more and more machines, and this incremental scalability gives you more power. That's going to be another appeal of this idea. The properties are pretty simple from a consistency standpoint, because we can talk about what types of consistency we might want, but one simple thing is that perhaps we just want to know the latest value associated with a key. In many cases these days, this is a simpler but more scalable version of a database, and you can think of it as a building block for a more capable database if you want better semantics than just "what's the latest value on something," but oftentimes a key associated with a value is enough, and you can call that a database. Good examples of this, and there are many: at Amazon, the key might be a customer ID and the value might be a profile; at Facebook or Twitter, the key might be the user ID and the value the user profile; in iCloud or iTunes, the key might be a movie or song name and the value the movie or song itself. There are many examples of keys and values you use every day without actually thinking about it, and by the way, all of the big cloud companies have really good key-value stores that scale really well and that people use all the time. So, the good question in the chat here is: are keys kind of the same as global file names in AFS? Yes, roughly speaking. Keys are these global names that you can
get at anywhere in the system and if you had a key value system that spanned the globe and everybody was using it then the keys would be a global naming scheme now the thing that's a little tricky about that is keys if you just have a key that's say your name uh the problem with that type of key is it's very clustered right so there are many people that have the first name john and so there would be a part of the key value space that's really overused and then there'd be lots of places where it's underused and so really what we do with keys when we want to really make this scalable is we start taking names that humans use and we hash them into a uniform set of bits like 256 bits that is the global name that these systems typically use and it's a hash over the human readable stuff so it's close okay it's a hash over human readable stuff okay but if you want to take the simple version of that question about whether our keys are the same as global file names the simple answer is yes now so in real life like i said here amazon has dynamodb which is the key value store that's used to power the shopping cart on amazon.com there's the simple storage system or s3 which is key value storage that's used for some of the big cloud storage services that people use google has bigtable and there are hbase and hypertable several of these distributed scalable data storage systems which ultimately come out as key value stores cassandra was developed by facebook and it's a key value store that's used in a lot of cloud processing there's memcached which is an in-memory key value store and for instance redis is an example of something like memcached that then spans multiple sites edonkey and emule are peer-to-peer storage systems and there's lots of others before any of these things we did research in peer-to-peer back in the 2000s um and so chord which i'll tell you a little bit about toward the end of the lecture here tapestry was one that we worked on chord was an mit
berkeley version um there were a number of other ones that are out there so these are all key value systems that work particularly well across the globe so all right so the reason i brought this up is i just want you to know that some of the ideas we're going to talk about in the last 20 minutes here are basically used quite widely today now um the question here let's see do files in a key value store have a smaller file size requirement than afs i'm not entirely sure what the question here is there's nothing that sets the particular size of a value so your key is the thing that might be limited in being a 256 bit hash of something the value is oftentimes something that can be anything from a small number of bytes to a gigabyte video or whatever usually the thing that's the limit is the maximum size not the minimum size i don't know if that answered your question or not um okay so let's look at the basic idea behind key value stores these are called distributed hash tables oftentimes too so it's like a hash table but distributed right so the main idea is we're going to simplify the storage interface so we're going to get rid of all that open close read write complication that we touched on at the beginning of the term you know forget all that except for by the way the midterm on thursday and what we're going to do instead is we're going to do put and get and we're going to partition the set of keys and values across many machines and so this thing here this huge yellow key value table is kind of the abstract space of all keys and values and what we're going to do is we're going to partition it across a set of available machines out there so that you know each machine handles a range of the space okay now i haven't told you how to do that but the idea is in principle that if you think of the space of all possible keys and values actually the space of all possible keys is really what we're talking about here you could easily say well here's all the
machines that i'm going to have participate let's just distribute the keys over them and make it work somehow okay so that's going to be the simple idea how do we make that work well there's some challenges right so one of them is whatever scheme we come up with to do this mapping from the abstract table to the physical locations we want to scale to thousands or millions of machines and so we need to make that index work somehow right that's going to be a challenge and furthermore as i kind of told you a little bit ago we want this idea of what's often called incremental scalability which is we want the ability to add more machines as we need more power and so whatever scheme we come up with ought to be scalable in a way that just increases automatically or at least easily the other thing is there needs to be some fault tolerance here so when machines fail because machines will fail we don't want to lose any data and this is one thing that we haven't talked a lot about because we haven't had a lot of time this term for this topic but if you have a machine that fails once a year on average and then you put 365 of them together you're now going to have a machine failure on average every day okay because the time between failures scales inversely with the number of machines so typical warehouses that google and facebook and so on have which have thousands or tens of thousands of machines in them have many failures going on per day where machines are coming up with some failure mode or maybe their disks are just plain dying or what have you but whatever scheme you come up with needs to handle failure very well because failure in this instance is not an uncommon thing okay just because of the scale and then of course consistency is going to be important so remember the cap theorem so consistency says that basically we have some way that when many clients are writing the readers all get to see those values in some consistent way or at least an eventually
consistent way where we all agree on what the latest version is eventually okay and that consistency needs to work even though there's failures happening and you know basically the cap theorem says that maybe we can't stay available consistent and partition tolerant all the time but it'd be nice that when the thing that failed came back we would eventually converge to something so that's consistency all right and heterogeneity is one that many times you probably wouldn't think about if i hadn't put it on the slide but the issue here is really that as i add machines over time these machines are all from different purchases you know they're from different purchases different lots different years different models and so they're all going to be a little different and so that means there's this huge heterogeneous mess of machines and network bandwidth and latency and all of those things and somehow we would like this system to mostly work well despite that wide-ranging set of components okay so this is a large set of requirements and you know nothing's going to be perfect but we might want to have some way of building our distributed hash table so that we can handle at least some of these things reasonably well okay and that's going to be our goal okay so some questions are for instance if we do put a key comma value where do we store it well that's going to be complicated because we've got to start by knowing what's available and if we keep adding machines and machines keep failing then the where might actually be more complicated than you might think and then of course when we go to get there's a question of where do we get it from especially if machines are failing maybe that key has moved around a bit since i put it in there originally and so whatever scheme we come up with has got to handle where very well and then we've got to do the above while still keeping our scalability
and fault tolerance and consistency and all those other things that we talked about earlier so how do we solve where well one way is we can take the key space and hash it to a location all right and so you know basically if we knew these hundred nodes are definitely going to be used we could build a partitioning that maps from the key itself to one of those hundred places and you know that might do the trick for us as long as everybody knows the hash function um but you know what if you don't know all the nodes that are participating or maybe they come and go or what's worse i mentioned this earlier maybe if some keys are really popular then you might have machines in a partitioning that was you know partitioned equally among the key space maybe some machines will fill up whereas other ones will be empty okay so with the where we have to be careful about trying to keep load balance in addition to all these other things and then lookup well if we build this thing by having a huge table on one machine that knows where everything is that's going to be a bottleneck and a single point of failure so i hope you guys can realize that on the face of it we certainly are not going to do this which is take this thing that i've shown you here as a big table and put it on some huge database server and use that to look things up okay we call that the directory approach and i'm going to show you abstractly what that means in a moment but that would clearly not be scalable or fault tolerant okay now before i go a little further i want to pause for a second and see if we have any questions okay so let's look at a recursive directory architecture for put so let's assume for a moment that this directory is a thing it's on a machine somewhere and we'll fix that in a few slides but then the way we would do this is if we want to put a new value for key 14 we go to the directory the directory would say oh um i'm going to assign key 14 to
node 3. um it would go to node 3 and do the put we would get an acknowledgement that came back potentially or or not but anyway what happens here is the put gets redirected through the directory to the storage server we're going to call this recursive because what happens is the put goes to the directory which goes to the file server so it's recursively going from one point to another the alternative is what we might call iterative and i'll show you that in a moment but how does the recursive get look like well we go get to the master directory it it knows where to go it gets the value which comes back and the directory forwards it on to me and so this again is the get goes to the directory which goes to the node and the node goes back to the directory goes back to the client um another way to think of recursive uh structure here is it's like routing we're kind of routing through the directory here the alternative is often called iterative and in the iterative case which is basically what's happening is we the client says i'd like to put key 14 the directory says oh i'm gonna put that on node three use node three and then the client says oh okay node three please put for me so notice that this is iterative so the first thing i do is i find the location that's and then i go and i do the storage so i'm iteratively working through a set of locations in the network okay and then get iteratively i get back to where the location is and then i can go to that server and talk to it okay so um just putting them both on a slide here we sort of have iterative versus recursive so the recursive versus iterative i should really change that title so the recursive case um is potentially faster because we're routing through the directory server and back it's a lot easier for consistency because we can make sure we know everybody who's trying to change that given location at any time whereas on the iterative side we've got everybody's kind of doing their own thing and they're talking 
to the storage servers independently of one another the downside of recursive is this directory is definitely a performance bottleneck the downside of the iterative is it's much harder to enforce consistency so they have pros and cons so is it easy to make the system bigger well we can add more nodes um and then maybe we can handle more requests so we can serve requests from all the nodes that have a value in parallel the master we could try replicating and somehow use it to replicate uh popular items okay except the master itself is going to be really hard to make scalable we could try making many copies of it uh but then we got to keep them all consistent with each other we could try to partition it so different keys are served by different directories but how do we do this and so while the the version that i've shown you so far where the directory is a thing um it seems like it's uh definitely going to be an issue from a performance standpoint and as is pointed out in the chat here it's definitely a single point of failure okay and it's it's really a single point of failure for both the recursive and the iterative versions because the iterative has to start by asking the directory where things are okay because remember we're allowing things to move around as failures happen so let's uh let's talk about fault tolerance in a couple of ways so one we could replicate for instance the key on many nodes okay so that basically the um the copy puts on several places so now we never lose data if if a node fails because we have another copy if the master directory fails then we lose availability but in principle we could scan through all of our nodes and reconstruct the directory from the actual data so from a fault tolerance standpoint this particular scheme i'm showing you here doesn't lose any data okay but we still have this directory being uh certainly an availability uh single point of failure at minimum okay but let's also talk about consistency so we want to make 
sure the value is replicated correctly so how do we know the value has been replicated on every node and what happens if a node fails and what happens if a node is slow so if we want to replicate 12 times and we want to make sure there are 12 copies then all of a sudden the put becomes slow because put has to wait for 12 copies and then it gets back an ack and then it can go forward okay so in general if you have a lot of replicas slow puts are going to be par for the course but potentially fast gets because i could get from any of the copies okay so let's look at a consistency issue here so if you do put right now if you look down here k14 is stored on node one and node three and notice that v14 is our current version but now some new client tries to put v14 prime and another client tries to put v14 double prime and now if you notice depending on network ordering we could get a situation where the writes between the directory and node one and node three get reordered such that node one thinks that v14 prime is the most recent and node three thinks that v14 double prime is the most recent and really if these two puts were simultaneous there's not necessarily any right answer as to which is the most recent but we want to make sure that the system has picked one okay and so the problem here is that get is kind of undefined okay so there's a large variety of consistency models out there for what to do when you have simultaneous writes going on there's linearizability where reads and writes get put to replicas such that they appear as if there was a single underlying replica so that's kind of like transactions there's an ordering um there's this eventual consistency which i've been talking about where replicas may temporarily be different but some anti-entropy process eventually makes sure that everybody agrees on the most recent copy and there's many others okay and that's a different class but you know i often talk about this when i teach 252 for instance the
architecture class the graduate class haven't done that in a few years but there you start talking about causal consistency and sequential consistency and strong consistency and so on which is really about what happens when multiple people are writing multiple different key values at the same time how do you order all that okay the simple one i want to talk about today is called quorum consensus and we're going to improve put and get operation performance in the presence of replication by doing the following so we're going to say that there are n replicas okay and put's going to wait for acknowledgments from at least w of them okay before it goes forward and we're gonna assume that things are time stamped in some way to make this work um and the time stamp basically is gonna let replicas that see two things coming in replace an older one with a newer one based on the time stamp then get is going to wait for at least r replicas to say here's a value and as long as w plus r is greater than n what we know is that if the put happened before the get then the get will always get the most recent value because the fact that w plus r is greater than n means that any overlap is going to basically have at least one replica that's got the most recent copy okay so there's at least one node that always has the update and so this quorum consensus is something that's used pretty commonly now in cassandra and a lot of these other systems used by facebook and by other cloud service providers um and it's up to the client typically to pick w r and n um but a typical value is that n is three you write to two of them and you read from two of them and as a result you'll make sure that you'll always get the most recent copy okay now um i'll let this simmer in your brain a little bit while you're thinking through this but for instance you could have r equal to three
which really says that i go and i ask for three copies and i wait till i get all three of them before i decide or i could for instance ask all three of them and when one comes back or two of them come back then i go forward that's really what's going on so if i say r equals two i'm really potentially asking all three and taking the first two that come back and that lets me actually tolerate a slow server and this w plus r greater than n actually lets me tolerate failures in that group of n so this quorum consensus has not just consistency positives to it but it also handles failures that have happened while you're writing between the writes and the reads and so on and it handles slow machines as well so quorum consensus is a remarkably simple idea that has a lot of positive benefits okay and the way you know which the updated copy is is the time stamp so what i said here is in red on this very slide basically when you write you typically use a time stamp um that you put in all the copies that go out there and basically the clients and the servers are sorting by time stamp okay and so responses are potentially returning not always the same value but they'll return r of them and then you pick the most recent of those r it's a fairly simple scheme but it's fairly powerful okay now and you might use w plus r greater than n plus one for any number of reasons including you know making sure that you're really sure that you've written three copies for instance etc there are fault tolerance and performance reasons for possibly having w plus r greater than n plus one um so here's an example for instance um here's the initial put where we try to write to all three of the copies but really we only get acks back from two which is okay because w equals two all right and so then later when we go to read we read from two of them and um in this case we hear back from one uh but we always get back the most recent so even
though we've read from two of them one hasn't responded we always get the most recent back because we've got that overlap between the two that we've written and the two that we're reading we know there's always one overlapping one that will give us our value okay and again the most recent the thing that's most recent is based on that time stamp okay and that's why i've got this in red here okay so see the red everybody who's wondering about most recent okay all right now if you guys will hold on for just a moment i'd like to get a couple more things done here since it's our last lecture um so storage the way we get scalability is we want to use more nodes we might have a number of requests and we can serve requests from all the nodes that have the values stored in parallel so that's potentially good the master can replicate a popular value on more nodes so we can get more performance with more replicas in a scheme like this to give us master directory scalability we could replicate it we could partition it so different keys are stored in different master directories but how do we partition it if i were you guys hearing this lecture for the first time i'd probably think that professor kubiatowicz hasn't really told me how to make the master work okay because i would have this uncomfortable feeling that yeah this sounds great but that master seems like a problem okay and so let's see if we can do something so load balancing the directory keeps track of the storage available at each node and preferentially inserts new values on nodes with more storage available okay so i can see that might work when you add a new node what you'd like to do is you'd like to rebalance everything somehow so that new node really starts taking its fraction of the load okay and so that sounds like there's some rebalancing process that i haven't told you about here and then when a node fails we need to make sure that let's suppose that n was three
and we were banking on the fact that we had three copies of things if one of those three copies fails we would like to make sure that some other node got a copy so we kept our basic redundancy in there of three okay and so that also i haven't told you how to do that so let's uh as kind of our last topic before we we really uh cut out here is how do we scale up our directory so the challenge here is the directory has a number of entries equal to the number of key value tuples in the system which could be billions or trillions pick your favorite large number and so that directory thing is big and really we want to distribute it in the same way that we're distributing the actual data and the solution here is something called consistent hashing which hopefully i think you may have heard of in other classes but it's going to give us a mechanism to divide the key value pairs amongst a large set of machines but do so in a fully distributed way without ever going through a single directory machine okay and bear with me but the idea is it's going to be simple but it takes a moment to catch so the idea is we're going to associate each node a unique id in a ring of all the possible values from 0 to 2 to the m minus 1. 
and typically m is going to be big it's going to be the 256 bits in our hash we're going to call that the ring and then we're going to partition that space of possible keys across end machines and all the key values are going to be stored in a node with the smallest id larger than a key okay so let me rather than um trying to catch all that uh in words let me show you this in a picture okay so here's an example of the ring and what i've done here is i've i've made m six okay so this is a six bit hash space really not interesting in the grand scheme other than for class but if you notice if m is six then the set of all possible hash values is from 0 to 63 okay and the id is that means this id space is from 0 to 63 and each node is going to have a unique spot in that space so node 8 you know so this node 8 i'm going to put on a spot of the ring node 15 node 20. how does a node know what its name is well it's going to take things like its ip address and maybe the name of who owns it and all that stuff it's going to put it together into a hash and it's going to hash it to find where its position on the ring is just like the keys are hashes uh over data that we want okay and the way we're going to handle this is for any given number of nodes which are hopefully spread throughout the ring then the node is going to handle every key from just bigger than the previous node so in this example node 8 is going to map or node 15 here is going to map everything from 9 to 15. 
node 8 is going to map everything from 5 to 8 et cetera and it's going to store those keys okay and if a node goes away then we're going to make sure that the next node up is going to store all the keys okay so this is a very simple scheme for consistently partitioning hash values among the ring okay so for instance the key 14 is going to be stored on node 15 because node 15 is the node whose name is the first one clockwise in the ring from the key i'm looking for okay questions by the way this thing i'm talking about with consistent hashing is not going to be on the exam we've talked about key value stores but i just wanted you guys to see a real implementation here okay now with the different types of machines we have a mixture of these machines spread throughout okay the key thing to make this work and that's no pun intended is that these be distributed throughout the ring and the way we get that is by having a good hashing function and you don't have to be aware of all of the machines involved so that's the part that's cool about this which i'll have to continue on wednesday you guys may have to come back for this but the chord algorithm is one which adapts to nodes coming and going where the only thing you know about is a small local set of the nodes okay so if you look in practice m is really 256 or more okay now chord is a distributed lookup service that does this um the important aspect of the design space is to decouple correctness from efficiency okay and the correctness which goes along with the question that was just asked is that every node needs to know about its neighbors on the ring and that's it so if you go back here the only thing that node 15 needs to know about is node 8 and node 20.
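The successor rule just described, store each key on the first node id clockwise around the ring, can be sketched in a few lines of Python; the 6-bit ring and the node ids 8, 15, and 20 follow the lecture's example, and the class and method names here are only illustrative, not from any real system.

```python
import bisect

class Ring:
    """Toy consistent-hash ring: a key lives on the first node id
    clockwise from the key's position (wrapping around the ring).
    Real systems first hash names into the id space; here we use
    small integers directly, as in the lecture's 6-bit example."""

    def __init__(self, node_ids, m=6):
        self.size = 2 ** m               # id space is 0 .. 2^m - 1
        self.nodes = sorted(node_ids)

    def successor(self, key):
        pos = key % self.size
        i = bisect.bisect_left(self.nodes, pos)
        return self.nodes[i % len(self.nodes)]   # wrap past the largest id

ring = Ring([8, 15, 20])
print(ring.successor(14))   # -> 15, key 14 is stored on node 15
print(ring.successor(7))    # -> 8
print(ring.successor(21))   # -> 8, keys past node 20 wrap around to node 8
```

Note that adding a node only moves the keys between it and its predecessor on the ring, which is exactly the incremental-scalability property the lecture keeps coming back to.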
and the rest of the algorithm of chord basically takes care of that okay and so um we're going to talk about that on wednesday we've gone way past our time and chord is not in scope for the midterm so that's fine but i just wanted to leave you guys with this interesting idea that we're going to show you how to build a distributed system we'll do that on wednesday such that we only need local information which is about a few nodes in the system and a logarithmic number of other nodes that are spread across this ring as long as we know only that local information we can do highly efficient lookup and deal with failure as nodes come and go and do replication in a way that keeps everything safe and so that's going to be chord and we'll talk more about that on wednesday so i hope you guys all come to that because um chord is one of my favorite simple distributed directory storage systems and so um please come we'll talk about that we'll talk about some other things but i'm gonna bid adieu to everybody i hope you have a great evening and good luck studying for the exam and please come on wednesday because we'll finish talking about chord on wednesday and we'll talk about a few other topics if people show up and if i don't see you on wednesday you've all been great and i'm going to miss these little lectures have a good evening and good luck on the exam
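As a footnote to the quorum-consensus discussion above, here is a minimal Python sketch of the w + r > n rule; the in-process replica list and counter-based timestamps are stand-ins for real networked replicas and real clocks, and the names are illustrative rather than anything from cassandra or dynamo.

```python
import itertools

class QuorumStore:
    """Toy quorum replication: n replicas, writes wait for w acks,
    reads consult r replicas; w + r > n guarantees the read set and
    write set overlap in at least one replica."""
    def __init__(self, n=3, w=2, r=2):
        assert w + r > n, "quorum rule violated"
        self.replicas = [dict() for _ in range(n)]
        self.w, self.r = w, r
        self.clock = itertools.count()   # stand-in for timestamps

    def put(self, key, value):
        ts = next(self.clock)
        # pretend only the first w replicas acknowledged in time
        for rep in self.replicas[:self.w]:
            rep[key] = (ts, value)

    def get(self, key):
        # read any r replicas; the overlap with the write set means
        # at least one reply carries the newest timestamp
        replies = [rep[key] for rep in self.replicas[-self.r:] if key in rep]
        return max(replies)[1]           # highest timestamp wins

store = QuorumStore()
store.put("k14", "v14")
store.put("k14", "v14'")
print(store.get("k14"))   # -> v14', the newest timestamp wins
```

Choosing w = 1, r = n (or the reverse) trades write latency against read latency while preserving the overlap guarantee, which is the "it's up to the client to pick w, r, and n" point from the lecture.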
all right everybody welcome back to uh cs 162. um as those of you that are local have noticed today it's like uh we're on mars or something because the sun is red and the smoke is in the sky it's pretty strange but let's see if we can get a good lecture out of here anyway so today we're going to continue our very short little discussion of some abstractions at user level both to help you get going in the class and kind of see what it is that we're going to be doing in the kernel when we are trying to support these abstractions so today we're going to talk about the file abstraction which is really also the i o abstraction which is an interesting thing about unix and we're going to finish discussing process management which we didn't quite get finished with last time but we'll talk about both the high and the low level file io apis and we'll talk a bit about why we have the different ones and then we'll look at some interesting gotchas that sort of come about when you mix processes and file descriptors and i o and yes to the comment in the chat we are definitely in the upside down today so um if you remember from last time among other things we talked about threads and processes and we introduced just briefly this notion of synchronization now i'm going to talk a lot about that in a couple of lectures but just remember some ideas here one was mutual exclusion which is ensuring that only one thread does a particular thing at a particular time one thread excludes the others and the piece of code that's being protected is called a critical section it's typically some state that's being operated on where basically if you have more than one thread in there you're probably going to get some bad behavior and so that's why we call it a critical section and why we need mutual exclusion and the
way we did that last time is we talked briefly about locks only one thread can hold the lock at a time and that gives you mutual exclusion so um we talked about two atomic operations lock acquire and release acquire waits until the lock is free and then grabs it and release unlocks and wakes up any of the waiters and again that was just a brief quick discussion we will get there in much more detail when we start diving into synchronization in a few lectures but one thing i did want to briefly do is tell you that there's some other tools that we might use instead of just locks and there's a really rich set of synchronization primitives that we'll start talking about but one of them that i wanted to just mention since you might encounter it fairly quickly is semaphores and the semaphore is basically a generalized lock that was first defined by dijkstra in the 60s and it's been around since then and everybody uses it inside of various operating systems and um it's really kind of like a generalized number okay so a semaphore has a non-negative value associated with it and it has two operations p and v okay so p is an atomic operation that waits for the semaphore to become positive and then decrements it by one um some implementations call this the down operation and then v is an atomic operation that increments the semaphore by one and if somebody's waiting on it it'll wake one of them up okay so um p by the way stands for proberen to test in dutch and v stands for verhogen to raise which is dijkstra's influence on this um what i wanted to give you was a couple of patterns so one pattern for a semaphore is very much like a lock we call it a binary semaphore or a mutex the initial value of the semaphore is equal to one and then if you do a semaphore down then the first thread that does that decrements the semaphore from one to zero and it gets into the critical section if
any other thread tries to do that then it immediately gets put to sleep because that would decrement the semaphore below zero which is not allowed so all subsequent threads that uh try the semaphore down are all put at sleep and then eventually when you finish the critical section that first thread calls up which increments the semaphore from zero to one which immediately wakes up one of the threads that then decrements it again so this acts exactly like a lock and it's a mutual exclusion pattern using semaphores and we actually saw the lock we used was called a mutex so that terminology uh gets intertwined between locks and this particular use of semaphores another pattern which is kind of interesting with semaphores which is why they're so interesting they can have many patterns is for instance if we start a semaphore off at zero instead of one then what happens well if somebody executes semaphore down um they're immediately put to sleep okay because they would try to decrement this below zero wouldn't happen they'd go to sleep i'm going to call that thread join for a moment because if another thread then executes semaphore up you immediately wake up the one that did down and so this is like this thread finish join pattern we talked about and this is yet another use of semaphores so um in a couple of lectures we're going to go through a number of different synchronization patterns and you can see that just by setting the initial value of the semaphore to different values you get some pretty interesting patterns okay so notice um by the way the question in the um in the chat here let me clarify just so we know uh the initial value of the semaphore is zero so that means that semaphore down doesn't actually decrement it can't because you can never go below zero so what happens instead is the thread that executes this block of code goes to sleep right away without decrementing the block that executes thread finish increments it to one which then immediately wakes 
this guy up and then he decrements it back down to zero again okay all right now these names are actually from the dutch if you look at dijkstra but anyway all of those languages are related in one way or another okay so if you remember also from last time the non-negative value of a semaphore gives you a locking pattern and implementing it is not exclusively due to the hardware so we will talk a lot more about how you implement these things later so if you notice we're talking about abstractions now so you don't have to worry how they're implemented you just have to worry about the api we'll get to implementing them all in due time all right so try to get the pattern and the api not how it's done so the other thing we talked about of course was processes in some detail and we noticed that there's multiple versions of processes one which only has a single thread another which has multiple ones the key idea is that a process has a protected address space and state such as open file descriptors which we'll talk about today and then one or more threads and for every thread each thread has a stack and a thread control block for saving its registers okay and pretty much anything that runs outside of the kernel these days runs in a process of some sort and the other thing we talked about last time is how to create processes and to do that we introduced fork and i'm going to briefly say again what fork does because the first time you see it it's a little weird but basically what fork does is it takes an existing process and it absolutely duplicates it so there's a new process that is a duplicate and that new process has an exact copy of all of the data in the address space plus copies of things like file descriptors and we'll go into that in more depth the question here in the chat about whether threads basically share the heap is yes they do
so they share the same heap they each have their own stack because if you shared a stack you wouldn't get a clean execution of any sort and so they don't share stacks but they do share the heap so now this thing about duplicating is a little weird okay so the return value from fork it's a system call so what you get back is a value and if that value is greater than zero then you happen to know you're running in the parent and in the parent process that value that came back is the process id of the child on the other hand if you get a zero back then you know you're the child okay and then you have to call getpid to find out what your process id is okay and if you get less than zero then everything failed and you didn't actually create a child process okay and so just to repeat this and we're gonna see it again later in the lecture the state of the original process gets duplicated in both the parent and the child completely duplicated okay the address space the file descriptors etc so if you looked for instance we looked at this brief bit of code here so here we execute fork before we execute fork there's this one parent process after we execute fork we now have two processes and i'm going to say this again because it's just weird right so those two processes are running at exactly the same spot and have exactly the same state until they return from fork one of them returns a non-zero number the other one returns a zero and that's the point at which they diverge and are no longer exactly equal okay so the process that calls fork is always the parent but it doesn't know that it's the parent so the way it knows is it gets back a non-zero number okay and it's a parent process yes this should say process but there's also a thread running there too but yes that would be a parent process child process in fact here let's just fix that okay so there we go we are fixed now.
notice we'll talk about what happens when you fork inside a multi-threaded process it's not pretty okay so we'll get to that a little bit later in the lecture but the bottom line is only the thread that happened to have called fork is the one that survives and all the remaining threads just go poof their state is around but there's no thread that's actually running so if you look at this example we gave can everybody see the screen again now since i went out and came back i think we're good right yep all right so if you notice here we call fork so now there's two processes the one that got greater than zero we know is the parent the one that got zero is the child so this kind of if else if else pattern is how we typically write a fork pattern and so here the parent goes off and i goes from zero to nine basically and it writes parent and goes parent zero parent one parent two the child's i goes from zero to minus nine and basically says child zero child minus one child and so on and the thing we talked about last time is this does not get screwed up the parent goes up and the child goes down and they don't prevent each other from doing their task anybody remember why yep they all have their own i's okay so this i this int i starts out as a global variable in the parent process but as soon as we fork there's now two different i's one in the parent one in the child and so this going up and going down don't interfere with each other because they're in completely separate address spaces so you've got to keep that in mind as well the only thing that's going to be a little weird here is since we're going to be sharing the file descriptors for standard out to the screen the parent and child statements are going to get interleaved in a non-deterministic fashion so we won't know how they're interleaved with each other
but we do know that the parent will have 10 values and the child will have 10 values okay all right and the heaps will be separate from the point at which the fork happens okay because the entire address space is copied so it doesn't really matter whether this is global in the static space or it's on the heap all right completely new process now here's a question would adding sleep matter here if i put sleep in there would it change the outcome and the answer is no what it's going to do is it might change the interleaving a little bit but again it's not going to prevent the two processes from running to completion okay are we good any questions on that so the reason it matters whether a process is a parent versus a child is that the parent typically has control over the child in terms of signals and the parent also can wait for the child which is my next statement i'll show you here to exit and get its return value from the child so the child process really is a subordinate of the parent so the other thing we talked about at the very end of the lecture was starting a new program with exec and notice this here's the fork pattern we do our fork we say if we're the parent we're gonna wait and i'll talk about this wait in a moment we're gonna wait for the child but when we go to the child process what it does immediately is it does an exec there's many flavors of exec so you should do a man on exec to find out this particular one takes a path and some arguments and it's now going to take the completely copied address space from the parent and then it's going to throw out all the copy and start a new program in that address space okay all right so anyway this seems a little strange this pattern where we fork a new child which is a copy of the address space and then we throw out the address space it does seem like a waste but in fact to get the fork semantics as i briefly mentioned last time we're actually
going to pull tricks with copying the page tables not copying the data and so this is not as wasteful as it seems okay so just to look at this idea of starting a new process here's a typical shell pattern let's just look at this in a different way again notice we fork if pid equals zero we're the child so we'll exec the new program otherwise we wait and if you notice what happens is as a result of the fork the child up here says oh i'm the child i'm gonna exec and the parent goes to wait and now the parent is waiting for the child to exit and the child goes off and starts the new program okay so this is a typical pattern in a shell now i haven't quite showed you how to wait yet that's my very next slide but you get the idea that in a shell when you type a command it actually forks a separate process for the child it runs the program and then later when that program exits which means the child exits then the parent will come out of wait and it goes on to give you the next prompt okay now for those of you who have been typing commands in your version of pintos you're typing them at the command prompt that's the shell so that's the process that lets you type commands and have them run and that's homework number two you're gonna actually get to design a shell okay all right command line shells so bash tcsh sh all of those things are shells now so let's look at a couple of other things so wait for instance is waiting for a child process to finish and so here's a very simple example i just showed you the wait okay and so there are many versions of wait you should also do a man on that one this particularly simple one takes a pointer to an integer as you see here and that pointer to that integer will get filled with a return code and this particular version of wait doesn't care which child process it waits for it just says wait for the next one okay and then in this instance of this program there is only one
and it'll wait till it finishes and then when it finishes we'll actually get back the pid in that case of the child which there's only one and that just finished and its status well where does the status come from well the exit code here so as you all remember 42 is the meaning of life so in this case we exit with 42 and what will happen is that's the child finishing that'll wake the parent up who's been trying to do a join type operation by waiting that 42 will get filled into the status variable we'll get back the pid of that child and now we'll get to move forward okay and of course that pid is going to be the same as the pid from cpid because we only created one child in this instance okay now the last two things i want to show you here they're related to each other is how to use the signaling facilities so this was about how to interact with child processes and if you have many child processes then you can actually wait for specific ones etc okay and wait works because the kernel keeps track of parent-child relationships and that's going to be something that you're going to have a chance to do some implementing with and we'll talk about more later okay and we're not passing anything about which child we're waiting for we're passing a container for it to put the status in but this particular wait says wait for the next child to finish okay now if the child seg faults or something else causes it to fail that will also wake up the wait because it'll just exit with a non-zero code kind of automatically now and if the child calls exec then it's still the exit code of the actual child process not the particular code they're running okay so the wait will wait until the process finishes not this particular piece of code because you're really waiting for the process not for whatever's running in it hopefully that's clear.
signaling so last but not least if you have two processes and you're interested in signaling from one to another remember that processes don't share memory unless we do some work which we haven't told you how to do yet and so they have to have some way of communicating and one way is the signaling facility which is kind of like a user level interrupt handler and the way we do that is we have to declare a special structure called a sigaction and inside that sigaction we can set some flags and some masks for what's enabled and you can look that up but here's a simple thing to do here the sigaction structure's handler we're gonna set to this signal callback handler okay and that's this function we've declared here and then we use sigaction to set that so whenever we see a sigint signal use this handler okay and notice that this code is you know not particularly great because it goes into an infinite loop right while one do nothing so this particular code on the face of it looks like it goes into an infinite loop forever except if you send it a sigint which by the way is what you get when you do a control c then that control c will cause that signal to go to the callback handler cause the callback handler to be called and we'll say caught signal and then we exit at that point okay all right and there's a question here about whether we need to say struct sigaction sa or just sigaction sa it depends on whether it's typedef'd or not so you should take a look in the actual header file okay now good question great question is there a default i'm thinking sigaction isn't necessarily typedef'd but it could be in the version of the headers that one has because they change but this you know struct sigaction sa would work now the question that was in the chat which is a good one is what happens if you didn't redirect it so there's a whole bunch of
default actions so the default action for sigint which is what happens when you hit control c is it kills the process so the default sig action actually kills the process what you can do here is if you don't want control c to kill it but rather to do something else then you can make your own signal handler okay and so there's plenty of default actions now there are some signals that in fact don't have any default actions you can change or don't have anything you can set okay and so for instance sigkill is a good example if you do kill minus nine and you send that to a process there's no way for it to catch that signal and it will immediately die but simple things like control c have either default actions or things that you can do on your own okay and so there's a whole bunch of posix signals and sigint is control c sigterm is the kill shell command sigtstp is control z et cetera and so the things like kill and stop are ones that you can't actually change with sigaction all right so we'll get to what posix stands for in just a little bit but it's the standard for the system calls we're gonna be talking about okay it is the portable operating system interface for unix which is where the x comes from all right so just to remind you of where we're at we've been talking about the levels of the operating system and the last lecture and this one we're kind of floating up here in user mode but you've got to remember that there's a bunch of things down here in the kernel that are providing functionality for us and we need to talk about how we get from up here down to there this interface is the system call interface and we briefly talked about it last time you're going to get to learn a lot more about it as you design a system call of your own but basically the things that you're used to at the user level all kind of float in the standard libraries and they're pretty much above the system call interface.
so we showed you this last time this was kind of the narrow waist of the system call interface okay it's kind of like an hourglass user code above system code running below and then there's the hardware and this system call interface is basically a set of standardized functions that you can call that cross the user kernel boundary and we're mostly again focusing at the os library and above and what you do with that okay and i pointed out i think last time as well that there's this libc which is the standard thing that gets linked when you use gcc and you link a program and that libc has a whole bunch of standardized functions that you typically call and when you think of c they're often the functions that libc's got and those functions end up calling the system calls which call the os which is why many of you have not quite seen system calls yet but you will okay so administrivia we are now in full game mode in this class project zero was due today remember this is to be done on your own this is just getting you used to everything about the projects and compiling them and so on i also mentioned briefly that we upped the slip days a little bit because of the weirdness of the pandemic and maybe because of the weirdness of living on mars these days which was weird but i'm recommending that you guys bank these for later rather than using them right away so group assignments should be mostly done plan on attending your permanent discussion session this friday assuming that we've assigned them by then and remember these discussion sessions are mandatory so we're going to start taking attendance as soon as people get used to them and remember to turn your camera on so that your ta can get to know you because they are going to be your advocate throughout the term so it's important to get to know them the question about when they're
going to be out is soon i'm not entirely sure the exact timing on that but it'll definitely be before you need to attend and attendance will be taken through zoom so just make sure to log in the other thing that we've chosen now is midterm one is going to be october 1st as we said on the schedule it's going to be five to seven and it's gonna be three weeks from tomorrow so it's coming up on us and we understand this conflicts with cs 170 but the 170 staff said basically that you can start the 170 exam after 7 pm and they'll give you some details about that rather than starting it at 6. all right and our exam is going to be video proctored there's going to be no curve this will be a non-curved exam so that will reduce a little of the pressure there and it's video proctored which will reduce a little additional pressure and also so you know you're gonna be using the computer to answer questions so we'll put out more details as we get closer to the exam we haven't put the bins out yet but we'll get those to you semi soon just so you know this is going to be based on previous terms for the bins okay and there are no alternative exam times during our pandemic so there's one exam so you should send mail to cs162 and fill out the conflict form and the fact that discussions are on thursday we'll take care of that okay all right the other thing is start planning on how your groups are going to be collaborating okay you guys should talk to everybody okay we'll talk more about video proctoring but we're also gonna want microphone and video and stuff but basically start thinking about how you're gonna collaborate and plan on meeting multiple times a week i would suggest with a camera right this is kind of how to humanize things enough that you can actually have
interactions we may even give some extra credit for pictures of you guys all on zoom together we'll see how that works make sure to fill out the conflict form on piazza if you have other conflicts okay i think that's been out for a while so hopefully people know about it and have regular brainstorming meetings try to meet multiple times a week i'm going to give a part of a lecture that i used to give a while ago and i think i'm going to start giving again on strategies for collaborating with teammates again it's very hard to deal with this in today's sort of virtual environment so we'll see what we can do okay i think that's all the administrivia that i had for today is there any questions okay homework one i don't know i haven't looked at the schedule everything is on the schedule so i think it's wherever it is so definitely take a look i don't think it's due quite so soon all right now let's move on so there was a question earlier what does pthreads stand for or what does posix stand for so posix is a portable operating system interface for unix and just to there's a chat right now about deadlines we will make sure that every deadline you need to worry about is on that schedule okay so we'll try to keep that as up to date as possible so just look at the schedule all right i'm glad we cleared up that i was pretty sure homework one wasn't due tomorrow so anyway so posix is the portable operating system interface for unix and it's loosely based on versions of the system calls that were appearing in different variants of unix you should know there are many variants of unix okay starting with the early at&t days and then there was berkeley software distribution unix yay berkeley and a bunch of other ones including the one you're working with pintos and so just among the unix variants there were variations and then you know then there were other operating systems that didn't
have the unix versions of the system calls and so there was a standardization effort to come up with a set of standard system calls that operating systems could support even if they had their own unique ones and so in fact if you actually go look at the windows system call interfaces there's actually a partial version of posix for some of the system calls so you can take a look and pthreads is posix threads okay so that's what pthread stood for so let's now talk about this unix or posix idea that's kind of the linchpin of this lecture which is that everything is a file okay so this was actually a little bit of a strange idea when it first came out and now pretty much everybody's used to it but there's an identical interface for files for devices like terminals and printers for networking sockets for inter-process communication like pipes etc all use the same interface with the kernel okay and what is that interface well that interface has open read write close those are very standard and the question of is linux a version of unix yes so open read write close are standard calls and you use those on everything from files on disks to devices etc okay and there is an additional call ioctl for those things that don't quite fit into the standardized open read write close some people call it io cuddle i've always heard ioctl it's really io control so i call it an ioctl but there are a lot of ioctl calls that you can make once you've opened a device to configure it so it might be things like what's the resolution of a screen what's the you know are you blocking or non-blocking etc those are all typically ioctls okay and so when you make a new device and you're developing your device driver interface with the kernel you typically have an ioctl interface for those specialized things that don't quite fit into that you know they're square pegs in a round
hole as far as open read write close goes now sockets the question about sockets and operations on those we'll actually start talking a bit about sockets next time as well so this idea that everything's a file was a bit radical when it was proposed there's a kind of a seminal paper from dennis ritchie and ken thompson that described this idea back from 1974 and i actually usually teach this paper when i teach 262 because it's an interesting first paper for that class but since i'm not teaching it this term i'm teaching you guys instead i figured i'd pop it up there as an optional reading so if you go to the resources page you can actually take a look at that paper and see how they talk about this idea and how they talk about things that still are well used ideas in unix operating systems to this day and that's from 1974 so it's pretty impressive how some of their very clean interfaces and ideas have lasted so long it's a little bit weird from a research paper standpoint if you've done any reading of research papers we'll read some more normal ones later in the term this one doesn't really have a lot of evaluation but it does describe some ideas so give it a shot so the file system abstraction which is what goes across devices and files and sockets etc is pretty much the simple idea that it's a named collection of information in the file system posix file data is a sequence of bytes as you can imagine the input from a keyboard is a sequence of bytes the input from a disk is kind of a sequence of bytes it's really blocks that then get put into the kernel and then eked out to the user as a sequence of bytes for files themselves there's actually metadata which is information about that file such as how big it is what was the last modification time who's the owner what's the security info what's the access control on it etc does it have a setuid bit or setgid bit on it we'll talk a little bit more about that later not today and
then so a file is like a bag of bits okay a directory is as you well know a hierarchical structure for naming bags of bits okay and if you notice as you're all very well aware a folder is something that contains files and directories and what you're going to learn as you get inside the kernel is a folder is really just a file that happens to map names to actual file contents okay and if you look the hierarchical naming is really a path through a graph okay so you start at the root directory which is a file that contains names like slash home means that the root directory slash has a home entry in it which points to a different file which has an ff entry in it which points to a file that has cs 162 etc and opening a file is a path through all of these different directories and you can imagine we're going to want to talk about caching and stuff to make that fast but we don't need to worry about that until later and then there's a bunch of other interesting things about links and volumes and things that we can talk about as we get more in detail but we're trying to keep things a little more at the user level for the moment so and then tying this all together of course every process well graph or tree that's a good question it depends on what you're talking about the directory infrastructure you see described in the original unix is strictly speaking a tree we've got the ability to make something much more graph like with modern operating systems and especially when you get soft links it gets much more like a graph okay so soft links or sym links as it was mentioned in the chat they're the same thing so every process actually has a current working directory it can be set with a system call which you could look up you could do man on chdir change directory and it takes a path and it changes the current working directory of that process okay so that on the face of it is nothing more than just a path that looks you
know like here's this is a path here home ff cs 162 public html and so on but that path is associated uniquely with that particular process that called change directory and then it can be used now we can still use absolute paths like home osce cs 162. this is an example of a path that's absolute because it starts with a slash at the very beginning of the path and therefore ignores the current working directory but all these other things you're used to you know index.html or dot slash index or dot dot slash index or tilde slash index these things are relative to the current working directory okay and so that's why you might set that current working directory and then you can use file names that look like this so if you say you know index.html what happens there is it takes the current working directory and then appends to it slash and index.html and that's the real file we're talking about so that's why you don't need to have an absolute path for everything you use okay and dot dot is a standard notation for the parent of a directory so if you use dot dot slash index it would actually take the current working directory go up a level and then down to index.html okay and tilde is actually a form of absolute path so this slide is a little misleading it's not relative to the current working directory it's under my notion of relative here because everything is relative to whatever your home directory happens to be so that's a good catch i'll fix that okay so tilde slash index says my home directory slash index and tilde cs162 means the home directory of the cs 162 account all right so those are two different usages of tilde so the focus of today's lecture sort of did everybody catch that so this tilde slash and tilde name slash those are two different usages for different users okay either the you user whoever you are or the cs162 user okay now we're going to be working our way through a lot of different things here.
by the way the tilde is actually a function of your shell it's not necessarily a function of the operating system so if you think it's too much of a hack then you could use a different shell that doesn't have it for instance so today we're going to kind of work our way through parts of this upper level here okay so for instance we'll talk about the high level io with streams and then we'll get into file descriptors and the system calls and we'll go a little bit below the system call interface okay but we're not going to get too far down there because we're trying to keep ourselves in the mode of you know user level here okay so quickly high level file io with streams so a stream is really an unformatted sequence of bytes could be text or binary data unix is notorious for being agnostic as to what the format of files is that was actually also a really big innovation at the time that that unix paper came out and you can take a look but if you notice that means an unformatted sequence of bytes with a pointer that's a stream and so here are some operations oftentimes you want to include stdio.h but for instance fopen is an example of a high level streaming interface most of them have an f in front of them not all of them okay and fclose and notice that fopen which opens a stream returns a pointer to a FILE structure okay and over here we have a mode and that mode is actually a string which tells you how you want to open that file so you can do things like open it for reading or writing or appending etc okay and some of these options allow you to truncate a file to zero and so on okay so there's nothing in it if you open it etc so an open stream if we succeeded because the file existed and we have permission then what comes back here is this FILE star so fopen returns a pointer to
a FILE data structure and that FILE data structure is what we're going to use from that point on to read and write and interact with that data okay if we had an error we would actually get back a null or a zero from this so we'd get back no FILE star and so ideally you would actually check to see whether what came back from fopen is null or not and that would indicate an error and then you have to go take a look at an error structure to find out why so stdio.h is the file you want to include okay include stdio.h has all of the things that you're going to need to be interacting with io so if you try to use some of the things i'm talking about in lecture and it tells you it doesn't know some of the constants i'm using it's probably because you've forgotten to include that .h file and you're going to want to get used to figuring out what .h files you need to include because that's going to be an important part of figuring out how to get your compiles to work okay so let's try to keep the chatter down a little bit so that we're not distracting people in the lecture here so they can ask questions there are some special streams okay stdin stdout and stderr which are defined for you okay so standard in is a normal source of input like the keyboard standard out is the normal place for output like the screen and standard error is the place where errors go and usually standard out and standard error both go to the same place which is to your screen okay but these are all defined without you opening them so when your process first starts up you have a standard in standard out and standard error and by the way you'll also have the low level io versions of these as well okay so standard in and standard out basically give you composition in unix okay the reason FILE is capitalized is because it's a structure and they've chosen to capitalize a lot of the names of important structures.
of of important structures uh the other answer of why the file is capitalized is i guess because it is anyway uh so uh the question about what happens if you open a file but don't close it and then exit uh the process um typically what happens is it uh flushes everything out for you and then uh closes in the kernel so it's not possible for you to to uh cause a major problem by opening something and then killing off the process without closing it it gets cleaned up automatically so standard in and standard out you're gonna see when you start working with your shelves and especially in homework 2 are basically going to allow communication between processes because if you have a whole chain of processes and you manage to connect standard out of one to standard into the next then you can communicate between those different processes in a chain and that will be one of the patterns that you're going to get very used to as you get more comfortable with unix okay so this is an example here the cat command says just take a file and send its output to the console so if you were to say cat hello.txt you would just see and you had a hello.txt file you'd see the whole file just streaming on your screen on the other hand when you put a pipe symbol like this little vertical bar and you pipe it to grep then what happens is cat takes the file sends it to standard out but by putting this little bar i've redirected standard out to be equal to standard end of the grep command and so now grep will take the input that we got from hello text and we'll grep for the word world and it will only output to the screen or it's standard out things that actually have the word world exclamation point in them okay so this composition with bars which you will implement uh on your own in homework two is really a connecting of standard in and standard out okay good now um let's look a little bit more at some of the high level api so for instance there are character oriented versions so notice that all 
of these commands have a file star pointer into them so we have to open the file first and then we pass in the pointer we got back to something like f put c or f put s or f get c or f get s we put that file handle in there and as a result the file structure then we can put characters that's a type of writing a single character at a time or a string at a time or get characters okay so example here's a simple example where i open uh the file input.txt notice that this is a relative reference so the current working directory is going to matter here and i'm opening that input.txt for reading i'm opening output.txt for writing what comes back from f open is uh the input file structure pointer what comes out from this f open is the output file structure okay and we also have an integer which we're going to use for getting characters so we we uh do an f get c on input that gives us the first character um or end of file eof if there's no character there okay now can anybody tell me if if we know that characters um let's talk talk about ascii characters for a moment um are eight bits why did i use an int for c can anybody think of that okay eof is something that is uh not eight bits right because it's minus one which is really um in representation it's really um all ones in c so it's 32 ones it's a minus one and so we can basically check for uh end of file by looking at that character otherwise we can use it as its character representation of eight bits okay good and so then notice we check and c is the character eof if it's not we put it on the output with f put c um and then we continue uh f get c for the next and so on okay yep exactly like a 61c project so hopefully this is reminding you guys what this is like now um let's look briefly at the block oriented version so those were character oriented block oriented rf read and f right and here um we again we're opening the same files but now we have a buffer okay and so um the uh now f get uh so now what we're gonna do is 
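Assembling the character-at-a-time copy we just walked through into one piece, here's a sketch (the helper name and file names are my own; the slide code may differ slightly):

```c
#include <stdio.h>

/* Copy src to dst one character at a time, as in the lecture example.
   Returns the number of characters copied, or -1 on error.
   (The function name copy_chars is illustrative, not a standard call.) */
long copy_chars(const char *src, const char *dst) {
    FILE *in = fopen(src, "r");
    if (in == NULL) return -1;                 /* always check for NULL */
    FILE *out = fopen(dst, "w");
    if (out == NULL) { fclose(in); return -1; }

    long count = 0;
    int c;                                     /* int, not char: EOF is -1, not 8 bits */
    while ((c = fgetc(in)) != EOF) {
        fputc(c, out);
        count++;
    }
    fclose(in);
    fclose(out);
    return count;
}
```

Called as copy_chars("input.txt", "output.txt"), it returns how many characters were copied, or -1 if either open failed.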
With fread, we're again opening the same files, but now we have a buffer. fread takes the buffer pointer from us, along with how many items the buffer can hold and the size of each item; this buffer holds chars and is BUFFER_SIZE long, so those two arguments together say the buffer can take BUFFER_SIZE characters. Then comes our input file structure pointer, and fread reads data into the buffer. Now, can anybody tell me how many characters this fread will read from the file? Everybody's looking at the buffer size of 1024 and saying 1024. But what happens if the input file only has 20 characters in it? How much will this fread return? It's going to return 20, because we got 20 characters, so in that instance it reads the whole file. Just because you give it a buffer with 1024 characters' worth of space doesn't mean you'll get 1024; fread tells you how many it actually got.

Then we loop: while we're getting some number of characters greater than zero. Why would we get zero? Because we're at the end of the file, having read all the characters. So the loop body says: we got some characters, write them out. Notice the pattern for fwrite: here's the buffer, here's its length in characters, and here's our output file, and it writes out the characters we just read. Then we read the next grouping and keep looping until we're done, and then we close. So this really just copies input.txt to output.txt. If there are only 20 characters in the file, we'll read one grouping, write it out, get zero on the next read, and never go through the while loop a second time. Do take a look at the man pages on these commands to see the exact organization.

There was a question about why we get 20: again, that's when the file only has 20 characters in it. If the file had, say, 1025 characters, we'd get 1024 on the first read; that length is definitely bigger than zero, so we'd write 1024 characters out; the second read would get only one character, even though it could take 1024; we'd go through the loop once more and write that one character; the next read would get zero characters; and then we'd close both files.

Will this block? It depends a lot on what you're reading from. If you're reading from a file, there won't necessarily be any blocking; it'll just read to the end of the file. If you're reading from standard in, like a keyboard, then end of file comes when special characters are typed, like Ctrl-D. And no, the buffer doesn't have to be 1024, and these could be something other than characters: they could be integers, in which case you'd say sizeof(int) and fread would pull things in in quanta of four bytes at a time. How many bits a character takes depends on whether we're talking about Unicode or not; since that's not an issue we want to deal with now, we'll say ASCII characters are eight bits, and you'll get to learn more about that later.

So you, as system programmers, and that's what you are now, need to be paranoid, which means you always check for errors.
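The block-oriented copy loop, similarly assembled into one sketch (again, the wrapper function is mine, not from the slides):

```c
#include <stdio.h>

#define BUFFER_SIZE 1024

/* Copy src to dst a block at a time with fread/fwrite.
   Returns total bytes copied, or -1 on error. */
long copy_blocks(const char *src, const char *dst) {
    FILE *in = fopen(src, "r");
    if (in == NULL) return -1;
    FILE *out = fopen(dst, "w");
    if (out == NULL) { fclose(in); return -1; }

    char buffer[BUFFER_SIZE];
    long total = 0;
    size_t length;
    /* fread may return fewer than BUFFER_SIZE items; 0 means end of file */
    while ((length = fread(buffer, sizeof(char), BUFFER_SIZE, in)) > 0) {
        fwrite(buffer, sizeof(char), length, out);   /* write exactly what we read */
        total += (long)length;
    }
    fclose(in);
    fclose(out);
    return total;
}
```

Note the loop writes only `length` items each time, which is what makes the 20-character and 1025-character cases above both come out right.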
For instance, you ought to always write code like this: call fopen on input.txt and, if input is NULL, deal with the fact that there was a failure. Always check for NULL; always check whatever the return code is. Here the fact that there was an error comes back as a NULL, and then you have to do something else, like call perror, to find out what the error was; this code will report that it failed to open the input file and then tell you why. Every one of these calls has a way of giving you an error back if an error is possible, so be paranoid and check return values. It's very easy, as a system programmer, to be bad and not check your return values, and then you get code that behaves very badly at the worst possible time; there's a Murphy's law for bad code. Yes, a language with a Result type, such as Rust, which I assume is what you're talking about, and which is totally an awesome language we may talk about a little later in the term, would give you a better way to check, but we're talking about C right now. And perror knows the interface to interact with, the errno interface; it knows how to look up what the error was. I may be a little loose with error checking on these slides; don't take that looseness as anything more than trying to keep the code examples on the screen from getting ridiculously long. This is literally do as I say, not as I show you in class, when it comes to error checking.
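A sketch of that paranoid pattern (the wrapper name open_checked is mine, not a standard call):

```c
#include <stdio.h>

/* The paranoid-open pattern: check the return value, report with perror.
   Returns the open stream, or NULL after printing why the open failed. */
FILE *open_checked(const char *path, const char *mode) {
    FILE *f = fopen(path, mode);
    if (f == NULL) {
        /* perror looks at errno and prints a human-readable reason to stderr */
        perror(path);
        return NULL;
    }
    return f;
}
```

The caller still has to handle the NULL, but at least the reason for the failure is on the screen instead of silently lost.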
All right, I do want to talk a little about positioning the pointer inside a file. What I've been doing transparently, without saying much about it, is assuming that if one fread reads the first 1024 characters, the next fread starts at the 1024 mark. Why? Because there's an internal pointer. There's an internal pointer in the buffering system that keeps track of where you are, and so you potentially need a way to change that position. fseek lets you set where you're going to read from next; ftell tells you where that pointer currently is; and rewind goes back to the beginning. Notice that fseek takes a "whence" argument, which can be one of three constants: SEEK_SET, SEEK_CUR, or SEEK_END. That determines what "go to a given offset" means: SEEK_CUR takes the current position and adds the offset to it; SEEK_SET just sets the pointer to your offset as an absolute value; and SEEK_END measures from the end of the file back. You can look this up, and note that all of it preserves the high-level abstraction of a stream. There was a question about whether you always need whence: there are other forms besides fseek that don't take it; do a man on fseek and you can see them.

Now let's contrast all this with low-level I/O. The Unix kernels, the Unix C environments that have POSIX I/O, have the following design concepts behind them, which I've already hinted at. Uniformity: everything's a file; we already talked about that. Open before use: we've talked about that too, but notice it gives the kernel an opportunity to check access control and arbitrate, and to refuse to return an open file handle unless you have permission. Everything is byte-oriented: even if blocks are transferred underneath, everything at this interface is in bytes, which reflects the fact that the kernel is completely agnostic about the structure and format of any files or data in the system; it imposes essentially no requirements on file contents.
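Going back to those positioning calls for a second, one common use is finding a file's length: seek to the end, then ask where you are. A sketch (the helper name is mine):

```c
#include <stdio.h>

/* Use fseek/ftell to find a file's size: seek to the end, ask where we are.
   Returns the size in bytes, or -1 on error. */
long file_size(const char *path) {
    FILE *f = fopen(path, "r");
    if (f == NULL) return -1;
    if (fseek(f, 0L, SEEK_END) != 0) {   /* offset 0 from the end of the file */
        fclose(f);
        return -1;
    }
    long size = ftell(f);                /* current position == file length */
    rewind(f);                           /* back to the start; same as fseek(f, 0L, SEEK_SET) */
    fclose(f);
    return size;
}
```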
The one exception to that format-agnosticism is the directory: the directory has a special format that the operating system knows how to interpret. Next, the kernel buffers reads and writes internally. Part of the reason is caching and performance, which we'll talk about, but another reason is that devices like disks are block-oriented, so you can only pull in a block at a time, whereas this is a byte-oriented interface to the user; we need buffering inside reads and writes to get both performance and the ability to match the block structure of the devices against the bytes the user sees. And finally, explicit close.

So let's look at this raw interface. Notice there's no f in front of open here, no f in front of creat or close. There are flags that say what access modes you want, and there are permission bits. What comes back from open is not a FILE star but an integer, a file descriptor; it's just a number. If the return value is less than zero, that's an error, and then you look at the errno variable to find out what the error was. On the locking question: there is no explicit locking of the mutex form being asked about here; you can look at that philosophy in the Unix paper, and we'll talk about locking a lot more as we go.

So when open returns, you get a number, a file descriptor. This open is essentially isomorphic to a system call; in fact, what's inside open in the libc library is a thin wrapper around the actual system call. The operations on the file descriptor work as follows: when open succeeds, an open file description entry is created in a system-wide table in the kernel, and that open file description object is an instance of an open file. The question I might ask you is: why did we return a number, which is really an index into a table of file descriptions, rather than a pointer to the file description itself? Can anybody figure this out? Yes, security. What sort of security? There are lots of good answers in the chat. For one, this description entry is in the kernel, so the user couldn't dereference it if they wanted to. More interesting, there's a philosophy here: by returning only a number, and only accepting a number in the calls, there's no way for you to access things you're not supposed to, because the kernel immediately checks your number against its internal table, and if it doesn't match up, it simply refuses. There's a small information-leakage advantage as well, but this is mostly about not being able to address file descriptions you're not supposed to touch.

Looking at the parallels with the streams from before: there are standard in, standard out, and standard error at the system-call level too, with values 0, 1, and 2, defined in unistd.h. And there's a way to ask, for a FILE star, what file descriptor is inside it: fileno. That works because when you called fopen, you were running a library call that internally calls open, so every FILE star you have has a file descriptor saved inside its user-level data structure, and you can go the other way as well.

In the low-level API we have read instead of fread. Raw read takes the file descriptor integer, a buffer, and the maximum size of that buffer in bytes; it doesn't quite have the flexibility of fread, and it tells you how many bytes came back. Zero bytes means end of file; minus one means an error. Writing is similar, and lseek is roughly the equivalent of the fseek we talked about earlier.

Here's a simple example. We call open with the name of the file, a flag saying we want read-only access, and the permissions we want on the file. We get a file descriptor back, we read from it, and we close it; notice that read and close have to use that same file descriptor. Then we might try to write to that file descriptor, but by the time we get around to the write, the descriptor is already closed, so that will be an error. Lots of errors can come back at different points; the file being bigger than some maximum size isn't going to come back as an open error, it's going to come back when we try the write, of course. How many bytes does this program read? We look at what came back from read, and that tells us how much.

Design patterns again, now at the system-call interface: always open before you use; everything's byte-oriented; and you have to close when you're done. Reads and writes are buffered inside the kernel, for the reasons we discussed; that buffering is part of a global buffer-management scheme, which we'll get into with the internals, and you'll see why the demands of the file system, the buffer manager, and so on require that caching, and also how it gives us good performance.

Some other operations in low-level I/O: we mentioned ioctls, which are for when you open something that's not a file-system file but a device or the like, and you can also use ioctl on open files for certain issues like blocking versus non-blocking behavior. And we can duplicate descriptors.
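Before those, here's the raw open/read/write/close pattern from the example above, assembled into one sketch (the helper name, the flags on the destination, and the 0644 mode are my choices):

```c
#include <fcntl.h>
#include <unistd.h>

#define BUF_SIZE 1024

/* The copy loop again, but at the system-call level: open/read/write/close
   on integer file descriptors instead of FILE*.  Returns bytes copied or -1. */
long copy_low_level(const char *src, const char *dst) {
    int in = open(src, O_RDONLY);
    if (in < 0) return -1;                        /* negative return means error; see errno */
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { close(in); return -1; }

    char buf[BUF_SIZE];
    long total = 0;
    ssize_t n;
    while ((n = read(in, buf, BUF_SIZE)) > 0) {   /* 0 = end of file, -1 = error */
        write(out, buf, (size_t)n);
        total += n;
    }
    close(in);
    close(out);
    return (n < 0) ? -1 : total;
}
```

Structurally it's the same loop as the fread/fwrite version; the difference is that every read and write here really is a system call.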
With dup, you have an old descriptor and you get a new one out of it. We can also make pipes: pipe creates a brand-new pipe as two file descriptors, two integers in an array, and if you then fork, the two processes have the two ends of a pipe they can use to communicate with each other. That pipe call is exactly what you're going to use to set up pipes when you build your shell. There are ways to do file locking, though it's not a mutex per se; it's locking specific to the actual file system. And there are ways of memory-mapping files; that's another interesting thing we'll talk about once we get a little further along, into how page tables work. We'll see how to take a file and map it directly into memory, so that you do reads and writes to memory instead of reads and writes through the file system: you'll be working with structures in memory rather than executing read or fread and write or fwrite calls. We'll talk about asynchronous I/O a little later as well.

So why do we have high-level file I/O at all? Look at fread: when you execute fread, a bunch of work is done, just as in a normal library function, and some of that work is checking whether the thing you're trying to read might already be sitting in a local user-level buffer. If not, it goes through the pattern we talked about last time or the time before for actually making a system call: set up some special registers with the system-call ID and the arguments, then execute a special trap that goes into the kernel, performs the system call, and comes back out. Low-level I/O, by contrast, is the case where read really just does the system call: read is essentially a C-level wrapper for the system call, while fread is something more sophisticated.
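Going back to pipe for a second, the pipe-plus-fork pattern might look like this minimal sketch (the function name and the message are mine):

```c
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Minimal pipe-plus-fork pattern: the child writes a message into one end
   of the pipe, the parent reads it out the other end.
   Returns the number of bytes the parent received, or -1 on error. */
int pipe_roundtrip(void) {
    int fds[2];                        /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) < 0) return -1;

    pid_t pid = fork();
    if (pid < 0) return -1;
    if (pid == 0) {                    /* child: write, then exit */
        close(fds[0]);
        const char *msg = "hello\n";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        _exit(0);
    }
    /* parent: close its copy of the write end, then read until EOF */
    close(fds[1]);
    char buf[64];
    int total = 0;
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        total += (int)n;
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return (n < 0) ? -1 : total;
}
```

The detail worth noticing is that the parent closes its copy of the write end before reading; otherwise the read would never see end of file.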
There was a question in the chat about what I mean by buffering. What I mean is this: you may read 13 bytes at a time, but the underlying system may be optimized for 4K bytes at a time. What fread does is ask the kernel for 4K bytes and put them into a local in-memory data structure, and then all the subsequent freads you do for a while just look in that buffer and grab the next 13 bytes without having to go into the kernel, which is much faster, because kernel crossings take real time.

So streams, as I mentioned, are buffered in memory, and one way to see this is with printf. If you printf "beginning of line", sleep, and then printf "end of line", the printfs go to the buffered version of standard out, and when that buffer finally gets flushed to the console, possibly because of the newline, everything prints at once: "beginning of line" and "end of line" appear as a single item. With the low-level direct system call, by contrast, you might write "beginning of line" to the standard out file number, wait a bit, and then write "end of line", and what you'll see is "beginning of line" on your console, a 10-second wait, and then "end of line". There's no buffering in the low-level path, but there is buffering in the path up top. By the way, the 18 and 16 in those write calls are just the number of characters being written.

Now you're starting to ask some interesting questions. One was: is there buffering in the kernel if there's buffering at user level? Yes, there are two different buffers going on. The buffering in the kernel is completely transparent to you; short of timing measurements or a system failure, there's no way to tell it's happening. Buffering at user level can make things much faster, but you can mix things up quite badly if you don't realize you're using the stream version of a file and the raw version of the same file together; that's usually a problem.

So what's in a FILE star? As we said, the FILE star does user-level buffering, and since it has to make the raw calls underneath, it clearly has a file descriptor inside the structure; the structure itself lives in your program. When you call fopen, it allocates a new FILE structure, calls the raw open, sets up some buffering inside the FILE, and returns the pointer to that structure to you. So buffering in a FILE happens at user level: when you call fwrite, the data goes into the FILE's buffer, and the C standard library chooses when to flush it out to the kernel. If you really care that something is visible in the file system, you need to call fflush yourself. Make the weakest possible assumptions about whether things have gotten from user level into the kernel.

Here's an example. We fopen file.txt and write some data to it. Then we fopen file.txt again, so notice we have the file open twice, in two different FILE star structures, and if we go to read from the second one, we're not necessarily going to see what we wrote through the first, because that fwrite may or may not have reached the kernel, depending on whether it got flushed. We've opened the file twice, there are two different user-level bufferings, we've written into one, and we haven't flushed, so we don't really know what will happen. If you're going to write code like this, be aware. Notice what I changed in the fixed version: I wrote the data and then called fflush; at that point all the buffered data goes into the kernel, and now the second fopen and the read will see the data. When buffering is going on and you start doing unusual communication patterns like this, you have to be careful. And yes, if you close the first file, it gets flushed out then too. Your code should behave correctly regardless of what's going on, so make the minimal fflush calls you actually need.

With the low-level API you don't have this problem: if you only use open, read, and write, different users of the file will always see the data, because the kernel hides all of its buffering from users. But then you don't get the performance advantage of user-level buffering. And why do you want to buffer at user level? I wanted to show you that system calls are about 25 times more expensive than a regular function call. In the chart, the blue bars are plain function calls, the green is a real system call, getpid in this case, and the red is a version of getpid that avoids the system call. Notice it's much better not to make system calls when you can avoid them. If you read or write a file byte by byte with raw calls, the max throughput might be 10 megabytes per second, whereas with fgetc, which is a buffered one-byte-at-a-time call, you can actually keep up with the speed of your SSD. Why? fgetc is buffered: you hand it a FILE star, the first character you read triggers a trip into the kernel that brings a big block of data up to user level, and then subsequent fgetc calls just quickly return the next character until you use up that buffer, and then you make another system call.
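To make the fflush point from the two-fopen example concrete, here's a sketch (the names are mine; path is any writable scratch file):

```c
#include <stdio.h>

/* Write through one FILE*, flush, then read the same file through a second,
   independent FILE*.  Returns the number of bytes the second stream sees,
   or -1 on error. */
long visible_after_flush(const char *path) {
    FILE *w = fopen(path, "w");
    if (w == NULL) return -1;
    fwrite("hello", 1, 5, w);
    fflush(w);                   /* push the user-level buffer into the kernel */

    FILE *r = fopen(path, "r");  /* a second stream, with its own buffering */
    if (r == NULL) { fclose(w); return -1; }
    char buf[16];
    long n = (long)fread(buf, 1, sizeof buf, r);
    fclose(r);
    fclose(w);
    return n;    /* with the fflush in place, the second stream sees all 5 bytes */
}
```

Delete the fflush line and the result is no longer guaranteed, which is exactly the hazard described above.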
That fgetc pattern is exactly a form of caching, and it's part of the reason you can run into trouble if you use it incorrectly. So why buffer in user space? Beyond performance, we want to keep the kernel interface clean. The operating system knows nothing about formatting; for instance, there's no way to ask the kernel to read until the next newline, because the kernel doesn't know what a newline is, and that's a feature. The solution is the buffered calls, like fgets or getline, that take FILE stars: they read a chunk of data out of the kernel and then very quickly walk through it to find the next newline and hand you the whole line.

Now let's talk a little about process state; we're working our way down toward the bottom. On a successful call to open, the kernel returns a file descriptor to the user and creates an open file description in the kernel. For each process, the kernel maintains a mapping from file descriptor to open file description, and on every later call it looks up the file descriptor it's handed to find the actual description structure. So in this example we open foo.txt and then read from that file descriptor into a buffer, 100 characters. Why does this work? The kernel remembers, because you opened it, that this fd number refers to foo.txt; that's all cached, so calling read knows which file to work with. Furthermore, it knows to pick up where it left off: this read gives you 100 characters, and the next read gives you the next 100. Why? Because that position is stored in the file description in the kernel.
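As a quick aside, the fgets line-reading pattern from a moment ago looks like this as a sketch (the helper name is mine):

```c
#include <stdio.h>
#include <string.h>

/* The read-a-line pattern: the kernel doesn't know what a newline is,
   but fgets does.  Returns the length of the first line in the file
   (not counting the newline), or -1 on error. */
long first_line_len(const char *path) {
    FILE *f = fopen(path, "r");
    if (f == NULL) return -1;
    char line[256];
    if (fgets(line, sizeof line, f) == NULL) {  /* reads up to a newline or EOF */
        fclose(f);
        return -1;
    }
    fclose(f);
    size_t len = strlen(line);
    if (len > 0 && line[len - 1] == '\n')
        len--;                                  /* don't count the newline itself */
    return (long)len;
}
```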
So what's in the file description? You can look it up; you have Pintos, go check it out. The fields that matter for today are the inode structure, an internal file-system thing we'll get to soon enough, which tells us where all the blocks of your file live on disk, and the offset, which tells you where you are in the stream.

So what's the abstract representation of a process? Bear with me a little; there are a couple of other things I want to say before we're done. A process has threads, registers, and so on; it has memory for the address space; and in kernel space there's this file descriptor table, which maps numbers, the file descriptors, to actual descriptions of open files. If we execute open on foo.txt and it gives us back descriptor 3, here's what happens: descriptor 3 in your process points to an entry in the open file description table in the kernel saying the file is foo.txt and the position is 0. Not shown are descriptors 0, 1, and 2; I started at 3, and we'll get to 0, 1, and 2 in just a second. Now suppose that after opening the file we read from descriptor 3 into a buffer, the next 100 characters. We read those 100 characters into the buffer, and notice the kernel knows what position that file description is at: it's at position 100.
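You can actually observe that kernel-side position from user level, using lseek with SEEK_CUR, which means "offset zero from where I am now". A sketch (the helper name is mine; path should name an existing file at least 4 bytes long):

```c
#include <fcntl.h>
#include <unistd.h>

/* Shows that the kernel's open-file description tracks the offset: after
   reading 4 bytes, lseek(fd, 0, SEEK_CUR) reports position 4.
   Returns that position, or -1 on error. */
long position_after_read(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    char buf[4];
    if (read(fd, buf, sizeof buf) < 0) { close(fd); return -1; }
    long pos = (long)lseek(fd, 0, SEEK_CUR);  /* "where am I?": the offset lives in the kernel */
    close(fd);
    return pos;
}
```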
Finally, if we close, the file descriptor table entry is cleared and the file description is cleared, and voila, we've finished that off. But let's do something more interesting: let's not close, let's fork. Here's process 1, and here's the child process we just created. The address space is duplicated, we've got a thread control block (I'm assuming a single thread for the moment), and the file descriptor table is duplicated, so both the parent and the child point to the same open file description. That means either of them can read from the file. If one process reads 100 bytes from file descriptor 3, the description moves to position 200, and when the other one does the same thing, voila, we're at position 300, because we forked the process and the two are sharing the file description: the descriptor table was copied, but both copies point at the same description. So now we start to see that fork does more than just duplicate the address space.

And if process 1 closes the file, notice that all that does is remove process 1's file descriptor entry pointing at the file description, because that description is still in use by another process; there's a reference count on it, so process 1 closing it means process 2 still has access. If you're asking whether we can copy this open file description for process 2: if you fork process 2, you'll get another copy of the descriptor table, but the only way to get a new file description unrelated to the old one, if that was the question, is to do another open of the same file.

Why do we allow this? Aliasing the open file description is a good idea for sharing resources, like files, between parent and child processes when they're working on the same thing together. And remember, in POSIX everything's a file, so this really means both the parent and the child have access to the same resources. On the question of why the position is 300 and not 200: look at the point at which the first read happened; that took us to 200. Then the other process goes to read another 100 bytes from file descriptor 3; we look up 3, see that the description's pointer is at 200, and reading the next 100 bytes advances it to 300.

So when you fork a process, the parent's and child's printfs go to the same terminal. This is one of the last ideas I want to finish up, and it's going to be very important for homework 2, so hold on for a second. There is a set of three standard file descriptors that are always allocated; we already talked about them: 0 is standard in, 1 is standard out, and 2 is standard error. That is, 0 is the normal input, from the keyboard; 1 is the normal output, without errors; and 2 is the output for errors. If a process that happens to be, say, a shell forks a child process, the child gets copies of all the same file descriptors. This is why, when a parent forks a child and both are printing output, descriptor 1 is shared and the outputs go to the same terminal, interleaved. That's the standard way a command typed at a shell prompt works, and it's why when you run a command and it prints, the output goes to the same terminal your shell is running in. If we close standard out in process 1, we don't close it in process 2; same with standard in. The only thing that changes standard out or standard in is you changing them. And on the question of whether two processes both reading standard in would each get a duplicate of the input: no, whichever one reads first gets the next character, and vice versa; there's only one copy of the input coming in.
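The shared-offset behavior from the fork example is easy to demonstrate directly. In this sketch (the names are mine), the child advances the shared offset by reading 3 bytes, and the parent's next read picks up where the child left off:

```c
#include <fcntl.h>
#include <sys/wait.h>
#include <unistd.h>

/* After fork, parent and child share one open-file description, so they
   share its offset.  The child reads 3 bytes; the parent then reads the
   NEXT byte, not the first one.  Returns that byte, or -1 on error.
   (path should name a file of at least 4 bytes, e.g. containing "0123456789".) */
int shared_offset_demo(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;

    pid_t pid = fork();
    if (pid < 0) { close(fd); return -1; }
    if (pid == 0) {                /* child: advance the shared offset by 3 */
        char tmp[3];
        read(fd, tmp, sizeof tmp);
        _exit(0);
    }
    waitpid(pid, NULL, 0);         /* wait, so the child's read happens first */
    char c;
    if (read(fd, &c, 1) != 1) { close(fd); return -1; }
    close(fd);
    return c;                      /* with "0123456789" this is '3', not '0' */
}
```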
of things coming in so other examples are sharing network connections after fork sharing access to pipes these are all things that when we start getting into more interesting patterns are going to be there okay the final thing i wanted to show you here is about dup and dup2 which is for instance suppose we've got file descriptor 3 pointing at this description and now we execute dup of three so what dup of three is gonna do is it's gonna make a new file descriptor four okay which points at the same open file description that three was pointing at and so after dup now we have both three and four pointing to the same file and we could if we wanted close three and still use four okay dup2 allows us to do something a little different which basically allows us to take file descriptor 3 and duplicate it and call it file descriptor 162 and so now we've chosen which file descriptor we wanted to use whereas dup chooses for us okay and when you start getting into the shell with homework two rearranging what descriptors zero one and two do is how you will make pipes from one command piping to the next and so on if you remember cat piping into grep okay all right and i think we've run out of time there are some fun things i guess we did have enough questions i wanted to just give you this one which is a fork in a multi-threaded process everybody's asked me about this don't do it unless you really know what you're doing and aren't going to be surprised so here's an example of a process that not only has some file descriptors but it's got multiple threads a red one and a black one if you fork and suppose it's the black thread that runs the fork command then when you're done you've got duplicates of all the file descriptors and the address space but only thread one's still running so this is unlikely to do what you want unless you're really doing what you expect okay all of the memory that the threads had will still be
around but the threads themselves won't be running okay if on the other hand you exec that's exactly right that was a good question then you throw everything out and you get a brand new process and that probably will do what you expect okay it's safe if you call exec the other question about does dup always assign the next integer i wouldn't count on that if you had anything that depends on that i wouldn't count on it it basically gives you one it's probably the next one but you never know for sure okay and what does exec do exec erases all of the process's address space and loads it up with the new program okay all right i think we're over time so i'm just going to say in conclusion we've been talking about user level access to file i/o and some of the user interfaces that you're going to become really familiar with okay and the posix idea of everything's a file is a pretty interesting one i encourage you to take a look at that original unix paper all sorts of i/o is managed by open read write close an amazing amount and we also added some new elements to the process control block like the mapping from file descriptors to open file descriptions the current working directory etc so i want to wish you all a great weekend and we'll see you on monday and sorry for going a little bit over have a great weekend everybody
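As a recap of this lecture's descriptor-sharing behavior, here is a small sketch using Python's os module, which wraps the same POSIX calls discussed above (open, read, fork, waitpid, dup, dup2, lseek). The byte counts mirror the lecture's example; the file itself is a throwaway temp file, and the descriptor number 162 is just the lecture's illustrative choice.

```python
import os
import tempfile

def _make_file(nbytes=400):
    # throwaway temp file with enough bytes for the reads below
    fd, path = tempfile.mkstemp()
    os.write(fd, b"x" * nbytes)
    os.close(fd)
    return path

def fork_shared_offset_demo():
    """fork copies the file descriptor table, but both copies point at the
    same open file description, so the file offset is shared."""
    fd = os.open(_make_file(), os.O_RDONLY)
    os.read(fd, 100)                       # parent: offset now 100
    pid = os.fork()
    if pid == 0:                           # child
        os.read(fd, 100)                   # shared offset: now 200
        os._exit(0)
    os.waitpid(pid, 0)
    after_child = os.lseek(fd, 0, os.SEEK_CUR)   # 200, advanced by the child
    os.read(fd, 100)                       # parent again: now 300
    end = os.lseek(fd, 0, os.SEEK_CUR)
    os.close(fd)
    return after_child, end

def dup_demo():
    """dup picks the next free descriptor number; dup2 lets us choose it.
    All aliases share one open file description, hence one offset."""
    fd = os.open(_make_file(), os.O_RDONLY)
    alias = os.dup(fd)                     # kernel-chosen number
    os.dup2(fd, 162)                       # our chosen number
    os.read(fd, 100)                       # advance via the original fd
    offset_via_alias = os.lseek(162, 0, os.SEEK_CUR)
    for d in (fd, alias, 162):
        os.close(d)
    return offset_via_alias
```

Reading 100 bytes through any one of the aliases moves the shared offset for all of them, which is exactly the behavior the shell homework exploits when it rewires descriptors 0, 1, and 2 for pipes.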
CS 162 Operating Systems and Systems Programming (Berkeley), Lecture 21: Filesystems 3: Case Studies (Cont.), Buffering, Reliability, and Transactions

welcome everybody to 162. we've been talking about file systems and we were actually going through some case studies last time of some real file systems and i would like to continue that but first i want to set a little context to make sure we're all on the same page here so on the left side of this diagram we have the user interface that's above the system call level where you open and close files and so on and read and write them using a path name that path name is resolved by a directory structure which ultimately finds an i number which is just an index into the inode structure on the disk and there may be several of them i'll show you that in a moment but that i number then gives you enough information to find a structure that then points out which of the data blocks are part of that file and ultimately they're on the disk so this discussion that we've been having including the fat file system and the berkeley unix file system etc is all about these structures that somehow map from the byte oriented file paths that you're used to at the user level down into individual blocks on disk and puts it all together so that it looks like a file system okay and so for instance we talked about the fat file system this was the simplest example of that where we have a whole bunch of disk blocks which are linearly addressed okay one two three four five six seven the file allocation table or fat basically has a one-to-one mapping with each block and all it does is it provides a way to link blocks together into files so here's an example where the file i number in this case of 31 represents a file whose first block is here and then that points to the next block which points to the next block which points to the next block and so we have four blocks each of which might be say 4k or what have
you in size and that's a complete file okay and so that's one very simple index structure and this is one that's lasted since the 70s basically so that's a pretty long time and it's in all of your cameras and usb keys and so on that you pass data around with and of course there are certain things that are important here that we talked about like where is this data structure the fat which is essentially just an array of integers where is that stored it's stored at a well-defined place on the disk which is the beginning of the disk usually there's a couple of copies of it to handle errors but there's a well-defined place on the disk that the file system defines and that the operating system knows about all right so were there any questions on the fat file system before i move on so that's a good question can you usually format a usb key to some other file system yes you can often format them to other file systems the reason that most cameras and other devices use the fat file system is it's so simple that it's easy to put into firmware so sometimes when you do that formatting that you talked about there you might only be able to plug it into a bigger machine like a linux box or something that is running a different file system rather than a camera that's running the fat file system that's a good question all right any other questions so the next file system we talked about is the 4.1 bsd file system this is a slightly different picture than i showed you last time but it's the same idea and here the inode or the index node has some metadata at the top such as what are the mode bits is it read write execute who are the owners time stamps for modifications how big things are and so on and then the important part here is this multi-level tree where there's some number of direct blocks okay and in the original bsd file system there were 10 direct blocks that later got expanded to 12.
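To contrast the two index structures, here is a toy sketch: following a FAT chain (a linked list threaded through the table, starting from the lecture's file number 31 with made-up block numbers for the rest) versus computing which region of a BSD-style inode a given file block lands in, using the original numbers of 10 direct pointers and 256 pointers per indirect block.

```python
# toy FAT: each entry links to the file's next block; -1 marks end of file
# (block numbers other than the starting block 31 are hypothetical)
FAT = {31: 563, 563: 1043, 1043: 2122, 2122: -1}

def fat_chain(fat, start):
    """Collect a file's blocks by following FAT links from its first block."""
    chain, block = [], start
    while block != -1:
        chain.append(block)
        block = fat[block]
    return chain

def locate_block(n, ndirect=10, ptrs_per_block=256):
    """Which region of a BSD-style inode holds file block n, and how many
    disk reads (counting the data block itself) are needed to reach it."""
    if n < ndirect:
        return "direct", 1
    n -= ndirect
    if n < ptrs_per_block:
        return "singly indirect", 2
    n -= ptrs_per_block
    if n < ptrs_per_block ** 2:
        return "doubly indirect", 3
    return "triply indirect", 4
```

This is the same arithmetic as the blackboard examples: block 5 is direct, block 23 costs two reads through the singly indirect block, and block 340 is past 10 + 256 = 266 and so lives under the doubly indirect pointer.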
but those direct blocks are really pointers from within this inode structure to a block on disk and what does a pointer mean it means the number of that block on disk all right and then the singly indirect pointer is a pointer to a block that has a bunch of pointers in it so the direct ones you can follow directly to get to those data blocks the singly indirect you follow to a block which then gives you pointers to a set of blocks and in the original bsd there were 1k blocks and 4 bytes per pointer and so in some sense there's 256 data block pointers within this indirect block doubly indirect basically you have a block which points at a block which points at data okay and so for instance you can very easily see that since there's 10 direct pointers if you wanted to go for block number 23 well the first 10 of them are direct blocks the next set of them the next 256 of them you'd actually have to read the indirect block first and then the data block and so there's two block reads okay yes and so the question here was was the design decision to go from 10 direct blocks to 12 you know what was the story there basically that was based on data they decided that was the right thing to do at the time you know sometimes decisions are made for reasons that aren't always the greatest but that one i believe was actually made because they figured they needed a couple of more direct pointers there you know so you should be able to do this kind of calculation on a test for instance like how about block number five where is that well we know the direct blocks the first ten of them are here so block number five would be you know zero one two three four five okay and then block 340 that's going to be into the doubly indirect ones and so you're going to have to read the doubly indirect block and then the singly indirect block and then the data okay good so the pros and cons of this scheme were really that it's relatively simple and as we talked about last time and the
time before is that basically it supports small files really well but it also handles large files now the question about how do we know that 340 is in the doubly indirect block range well because these data blocks you see here are all linearly laid out and so the first 10 blocks are in the direct blocks and then there's 256 more so that's taking up 266 blocks so block number 340 is clearly larger than 266 and so that's why we're getting into this range over here and it's not large enough to get into the triply indirect region does that answer your question so there's no metadata in any of these pointers these pointers are just pointing to data okay all of the metadata is in the inode itself and so that's important you can kind of think the inode is the file or the file is the inode because it has all of the metadata about who can read and write it etc and all of the data is pointed at and everything to the right of the inode is just raw data blocks with nothing else other than either data which is just binary or pointers to data blocks okay now the downside of this well so when reading a file the question is how do you know what the next block to look at is well simply if you remember the file description that you get when you do a file open keeps track of what your offset in the file is and so once you know the offset what byte are you on then you can divide by the block size and that'll tell you what block you want and then it's an easy mapping just like we did here block 340 is always you know down in here and so that piece about how do you know what the next bytes are or the next block is that's because the next byte pointer is kept in the open file description that's part of your file description you got when you did an open okay so you can look back to lectures from about a month ago where we talked about that in more detail all right so the downside of this is there's nothing in here and there's
certainly nothing in the fat file system that says anything about making this perform well so ideally what we'd like is successive data blocks are laid out on the disk on the same track or close by tracks in order to make things really high speed bsd 4.1 didn't do anything to help with that and so as a result you'd format the file system from scratch and everything would be nice and fast and then over time as you wrote files and deleted files and created files and so on things would get progressively slower and that's because things would get scrambled and lose locality on the disk itself and so the bsd follow-on or the fast file system which is what we were talking about when we ran out of time last time did a lot of work to try to make things perform fast and well and essentially they kept this layout of the inode although they had a couple more direct blocks and they made the data blocks larger okay and so let's look at this a little bit this is the fast file system there's a paper that i put up on the resources page that you can take a look at that talks about this fast file system it's got the same inode structure modulo a little bit now one thing i said last time i was incorrect on sorry about that i had a typo on my slides basically the block size in the original system was 1024 and that went up to a 4096 minimum although there were also options to have slightly larger blocks okay the paper is up there you can take a look at it and there's a number of performance and reliability optimizations that were done in the fast file system so as i'm going to show you rather than putting all the inodes on the outer tracks they're distributed throughout the disk there's bitmap allocation instead of linking things in a free list and if you have a bitmap where a one means in use and a zero means free you have a much better ability to take a look at the bitmap as a whole and say oh here's a big string of zeros free blocks that i'm going to start a new file in so
that there's the ability to lay things out well okay and so part of what was done in the fast file system was an attempt to actually allocate files contiguously and address some other performance issues like skip sector positioning which i'll tell you about in a little bit and one of the interesting things that you might or might not realize is that by forcing the fast file system to always keep ten percent of all of the data blocks in reserve meaning that when the disk is 90 percent full it appears to be 100 percent full keeping 10 percent free turns out to give a high enough number of free blocks that the likelihood of finding a big string of empty blocks together on a track is much higher and so it turns out that 10 percent is an important aspect of getting good performance out of the file system okay so the first thing that i said was they changed the inode placement and if you look at the early unix file systems and the windows fat file system etc the headers or the inodes were all stored in a special array on the outermost cylinders and it was a fixed size array and so the number of inodes was fixed at the time you formatted the disk and each is given a unique i number and so you can say for every i number there is an inode which means there is a file associated with it that's okay except you can imagine it's got some pretty big performance problems because the inode is stored far away from the data potentially and it's also got some reliability problems because if the disk head were to crash on the outer cylinders and trash all the inodes you've effectively lost all the files even though all the data is still okay okay so there's a number of reasons why this particular layout wasn't good and i've put them down here as problems one and two so with the inodes all in the same place you can potentially lose a lot of or all of your data when you lose the inodes and when you create a file you don't really know how big it is and so there isn't any really good way to handle the layout of the
blocks relative to tracks when you sort of shove all the inodes in one place okay and so let's take a little bit of a look at what they did instead so in the fast file system they divided each platter into a whole bunch of groups okay so block group zero one two etc and so rather than for instance putting all the inodes on the outer track for the whole platter what you do instead is you have some inodes and a free space bitmap for each block group okay and so what's good about that is that now the inode associated with a given file can actually be in the same block group as that file okay and if you choose to have a directory with files in it both the inode for the directory and the inodes of all the files in that directory can all be in the same block group along with the data and so things like ls -l or whatever that give you the metadata of all of the files in a directory can run very fast because of the locality that we've gotten out of this okay and so the file volume is divided into block groups and it's a set of tracks so moving the head back and forth within a block group is not that bad and so we've got data blocks metadata free space all arranged within the single block group and so just by going to a block group with some extra space you have the ability to lay out things very efficiently okay all right i think i said all of that so furthermore the way that the layout algorithm works if you remember one of the problems that we have with the unix interface is that when you open or create a brand new file it's empty and the file system doesn't know how big you want it to be and it only figures it out on the fly as you're writing to it so what the fast file system did was for small files or files that have just been created it fills holes in the local block group and then when it crosses a certain threshold it goes to a whole other block group and finds a big string of empty blocks to continue on
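The "find a big string of empty blocks" step can be sketched as a scan over the block group's free bitmap; per the lecture's convention a one means in use and a zero means free, and the example bitmap below is hypothetical.

```python
def find_free_run(bitmap, want):
    """Return the start index of the first run of `want` free (0) bits,
    or -1 if this block group has no such run (time to try another group)."""
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0
        if run == want:
            return i - want + 1
    return -1
```

Keeping roughly 10 percent of blocks in reserve is what makes a scan like this likely to succeed: with enough zeros scattered around, long runs of them exist.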
so there are these thresholds that actually cause you to go to another block group okay and the feeling there is that if you're running a big sequential read and every so often you have to switch to another block group that's okay because you're still getting relatively high performance okay with these occasional hops rather than having to go back and forth and back and forth for every block okay and again it's important to keep about ten percent free in order to make this work okay so good question is a block group the same as a cylinder group well a block group is what you see on a surface and if you remember there are a whole bunch of platters on top of each other and so if you take the block group and you go through the whole stack that's a cylinder group okay so that makes sense so the cylinder group is the set of rings of a block group going through all the platters and you remember the reason that we talk about it that way is because all of the different heads in the head assembly move together as a group and so in some sense we move into a cylinder and so then a cylinder group would be a group of cylinders that are all small movements of the head okay good question now the summary of the layout pros are for small directories you can fit all of the data the headers etc all in the same cylinder with no seek for small directories and small files so that works out well the file headers the inodes are actually much smaller than a block so you can get several of them at once and so that really optimizes for doing directory operations that use all of the files in the directory so that works really well and then last but not least this is certainly discussed in the paper and it's an important side effect of this is that by putting the inodes close to the data what that means is if the head crashes that's where the head touches down on the spinning disk and takes out a track
or even a whole block group all the other files are fine because their inodes are safely next to the data in other block groups so this is a big reliability advantage as well okay good yeah you could think of it that way so again think of a cylinder group as all of the block groups on top of each other on the different platters which are double-sided okay so just to say a little bit about the allocations so if you remember there's a bitmap per block group and it's defined in a set of blocks at the beginning of the block group there's just a set of bits that are together that tell you which blocks in the block group are free so we have ones that are in use we have a couple of free ones and so what we do is if we write a couple of blocks in a file it just finds them very quickly because it can just look at the bitmap okay and then when it is writing a large file it's easy to figure out which blocks are available again by looking at these long strings of zeros and when you get past a certain threshold as i mentioned you go to another block group and pick a big string of zeros to write on okay and these are basically their heuristics to keep the overall speed of the fast file system fast even over time as you delete files create files and write files okay good the last thing i wanted to give you an idea about is the rotational delay problem which is an interesting one for old systems so the issue is that if you remember as the disk rotates the head is picking up all of the various sectors that are in the track that it's on and in some of the older systems back in the beginning of the fast file system you'd read a block or you know a series of sectors off the disk into memory and you'd have to work on it a little bit before you went to read the next thing and by the time you got back to read the next thing it had actually passed under the head
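The skip-sector placement can be sketched as an interleaved layout: leave a gap of slots after each logical sector so that by the time the software finishes processing one sector, the next one is just arriving under the head. The 8-sector track and one-slot gap below are made-up numbers for illustration.

```python
def interleave(nsectors, gap):
    """Assign logical sectors 0..n-1 to physical slots around a track,
    skipping `gap` slots after each placement so a slow reader still
    catches the next sector on the same revolution."""
    track = [None] * nsectors
    pos = 0
    for logical in range(nsectors):
        while track[pos] is not None:      # slot already taken: slide forward
            pos = (pos + 1) % nsectors
        track[pos] = logical
        pos = (pos + 1 + gap) % nsectors
    return track
```

With a gap of zero you get back the plain sequential layout that caused the missed-revolution problem in the first place.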
you had to wait for a whole new revolution to get the next block and so if you're not careful everything slows down you did such a great placement on the track but because the blocks are too close to each other on the track you actually miss them okay and so what the fast file system did was what's called skip sectoring which is it calculated kind of how much time was needed and so for instance all of these magenta blocks on a given track are all part of the same file and you put that extra space in there so that as the disk is rotating you grab a sector you process it and when you're ready for the next sector it's coming underneath your disk head okay and that was called skip sectoring and they implemented that as part of the fast file system paper today of course as i implied a couple of lectures ago we actually have a whole bunch of ram on the controller and so what happens on today's disks is you just read the whole track into a ram track buffer and then for subsequent reads it doesn't matter how long it takes to get back to them you can pull the data off at high speed without worrying about the physical rotation okay and so this is a good example of something that was solved back in the original fast file system days that has been obsoleted by smart disks now there's a question in the chat of does the fast file system get used anymore yes so its descendants are basically in the linux ext2 and ext3 the bsd versions of the ufs file system and so on so the descendants of that code are all still used okay so this is not just a historical artifact except for the rotational delay fix so it's a useful file system to know about okay so modern disk controllers do all sorts of things i mentioned not only do they do full track buffering they also run the elevator algorithms and to a large extent they figure out which blocks are bad and they hide that even
from the operating system in some instances by transparently mapping good blocks in over bad ones okay so the pros of this fast file system which was in the 4.2 version of bsd are it's very efficient storage for both small and large files that just comes from the structure of the inode it has good locality for small and large files that's because of the way that the block groups were divided up and inodes were spread about and there's good locality for metadata and data and you don't need to do any defragmenting to get performance back unlike earlier versions of the file system the cons are it's still pretty inefficient for tiny files because for instance if you think about it a one byte file actually requires both an inode and a data block okay so let's look at this for a second you know it's surprising but if you look at this layout it doesn't matter how big the file is you still have to have all of the inode structure and then you have to have a complete block for the data and so if you have a few bytes in your file it's extremely inefficient okay and that's one of the consequences of this layout now this does do quite well for small files in general but for really really small files it's not so good and there's always this inode separate from the data okay so we can do something about that which is what was done in ntfs and i'll tell you about that in a second i did want to do a little administrivia which is we're done with the grading i know i said that was going to be fairly soon i think that came out over the weekend we had a higher mean this time 55 standard deviation of 15 so that's about standard for this class and as we've talked about before there's a historical offset here of 26 so you can take your grade whatever you got and add 26 to it and take a look at the various bins that we
put up on the website to sort of get an idea what grade you got on that exam all right there's no class on wednesday so i guess you won't be hearing me on wednesday and you can take that time to do a breather and get outside a little bit the other thing that i wanted to mention is again i said this before but make sure if you've got any group issues or if you have a group member that's m.i.a. make sure that you let us know when you do project evaluations and that your tas are well aware what's going on okay that's so maybe we can reach out to them and help that situation where we couldn't otherwise or maybe we could get you all together to talk to try to make sure that project three is smooth and it's certainly important for us to know that when it comes to awarding points for the project at the end so okay i don't think i had any other administrivia anybody have any questions i know that the regrade requests and so on are still going on so okay so i guess with that i'm going to move on now and so this issue that we had with the standard kind of indexed file system is this idea that the inodes are separate from the data which actually works pretty well for most size files the question is could you do something different and this is an answer to the question from the chat earlier of is the fast file system even used at all and the answer is yes so the descendants of the fast file system have found their way into linux the ext3 file system is one that is pretty standard in linux these days and freebsd has a variant of the original fast file system as well but here's an example of the block groups laid out in ext2 or ext3 i'm going to keep them together for a moment because they're effectively the same thing in this instance and that's this layout so here's our block groups we have a group descriptor table that's kind of along with the superblocks at the beginning of the disk in a
well-defined place the superblock is describing information about the file system as a whole if you look in that descriptor table you can figure out where block group 0 is et cetera the other thing that linux has got is at the time that you format a new file system you can pick the size of the blocks you can make it 1k 2k 4k 8k 4k is pretty standard very similar to 4.2 bsd with 12 direct pointers if you look here for instance you could say well what if we want to create a file /dir1/file1 in ext3 what happens there is you've got to find the root directory so you go to a special spot in block group zero and in that inode table you say well inode number two is where the root directory is and the inode for the root directory points to say block 258 for the actual data so the inode there points to the data for the root directory you look up dir1 that says oh that's going to be inode 5033 you look down at inode 5033 which is in block group 2 for instance and in there you start looking at the data and it points to say block 18431 which is the data for dir1 and you look in there and you can create file1 as an entry in that directory and you can allocate a block for the data okay and so the way to follow these pointers what you should think about is that every pointer here represents a block number that's being pointed at by some other block and that's how we get the structure of the file system itself okay now yeah the difference between ext3 and ext2 is that ext2 is like the original file system and ext3 adds journaling on top of that in order to give us a level of reliability and if we get that far today i'll tell you more about journaling too okay okay questions no all right so if you remember the directory abstraction i do want to say a tiny bit more about that as well so you have the root directory it's got the usr directory inside of it
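The /dir1 walk just described can be modeled with dictionaries standing in for the inode table and directory data; the i-numbers 2 (root) and 5033 (the subdirectory) come from the lecture's example, while the file's i-number 7001 is made up for illustration.

```python
# toy inode table: i-number -> inode; directory inodes map name -> i-number
INODES = {
    2:    {"kind": "dir",  "entries": {"dir1": 5033}},   # root directory
    5033: {"kind": "dir",  "entries": {"file1": 7001}},  # /dir1
    7001: {"kind": "file", "data": b"hello"},            # /dir1/file1
}

def namei(path, inodes, root_ino=2):
    """Resolve an absolute path to an i-number by walking one directory
    at a time, the way the file system follows its block pointers."""
    ino = root_ino
    for name in path.strip("/").split("/"):
        inode = inodes[ino]
        if inode["kind"] != "dir":
            raise NotADirectoryError(name)
        ino = inode["entries"][name]
    return ino
```

In the real file system each dictionary lookup here costs at least one disk read of an inode plus one of a directory data block, which is why keeping a directory and its files in one block group pays off.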
that points to say /usr/lib4.3 and /usr/lib those are separate directories and inside this directory something points at an actual file etc so directories themselves are basically specialized files in a specialized format and they're lists of file name to file number mappings okay and there's a bunch of system calls that actually interact with directories directly so for instance open or create of a file with a file name actually traverses this directory structure to figure out which one of these subdirectories you're going to put the new file name in there's mkdir and rmdir system calls for making and removing directories in a given place there's also link and unlink which can remove just this link so potentially if this particular file foo has two different names in the directory structure you could unlink one of them or you could create a new one we'll talk a little bit more about link and unlink in a second so the question of is the kernel itself stored outside the file system it depends on what you mean so the kernel itself is certainly on the file system okay there's a special boot code that just knows enough about the file system to pull the kernel in off of a special /boot directory for instance in the root directory it's not terribly intelligent but it knows enough to read that through to pull it into ram and then once that's been bootstrapped then it starts booting and it can do the rest of the file system so yes the kernel is actually in the file system which is an interesting catch-22 when you think about it because you have to make sure that the boot code that you load from some well-defined place on disk has enough information and knowledge on how to interpret the file system to just pull the kernel itself in yeah that's a great question so libc provides a bunch of support like opendir and readdir you guys should take a look at these calls they basically allow you to actually open a directory and scan
through it for a bunch of file names to find out what are all the files that are in that directory or whether they are directories instead of files you can do that there's a set of calls that are in libc that are there for you i don't know if any of you have actually used them yet but i've used them many times in the past so what's a hard link so i wanted to tell you a little bit about a hard link this is a mapping from name to file number in the directory structure so a hard link is really just a directory entry okay but i'll show you why i call it a hard link in a second so for instance in this directory /usr has a lib4.3 name in there and it matches up with an i number which is the inode for this next directory and so a hard link is really a name to i number mapping that's inside of a directory okay and the first hard link is made when you do create and you can actually create extra hard links thereby giving this file multiple names in the name structure with the link system call or the ln user command okay and typically you can only do that for directories if you're a super user unlink will remove this link and if as a result you've got a file that's sort of floating in space and disconnected that will effectively delete the file because all of its resources will be freed up at that point okay and so that leads into an interesting question of when can the file contents be deleted so the answer is this lib4.3 foo is a file with some stuff in it it can be deleted if there are no links left to it and nobody has it open so once you open a file you also have a reference to the file okay but it's in memory and so if you have a process that has opened a file and then you go delete it out of the file system that file is still going to stick around long enough for that process to read or write it and it's only when it's closed at that point if it doesn't have a hard link in the directory structure then it goes away okay so that's a
little bit weird behavior i'll tell you uh last term when i was teaching 162 our first example of a midterm like you guys had was some code that we wrote to produce google forms and uh we had a de-scrambling thing so that when a student would ask a question about whatever their 1.4 was it would tell us what 1.4 was really for us and then any time we put out a correction it went back to everybody who had that scrambled thing okay that was great and it worked pretty well except the server we were running it on its log filled up and so the server crashed and then we couldn't reboot it because even after we had tried to just uh delete all of the data in the log it was still being held on to by processes that still had it open and uh it took too long for us to repair it and the midterm ended without corrections for the last like third of the midterm so that was a bad scenario but all right so in contrast to hard links are soft links so this is a soft link or a symbolic link on some operating systems they call them shortcuts and this is going to just map one name to another name without it being an actual name to i number mapping so instead it's really a name to file name mapping okay so in our regular directory a hard link is a file name and a file number and that's sort of supported directly in the file system a symbolic link says well if you get to this point in the directory and you look something up by this file name what you get back is just another file name and so rather than a direct pointer to one directory down or whatever with a soft link you can basically point it pretty much anywhere because you're saying well replace this name with this complete name which could be an absolute name so you can end up in another file system or what have you and so that's typically with ln -s is how you make symbolic links okay and the os looks up the destination file name each time the program accesses it and so this lookup
could fail whereas with a hard link the lookup will never fail because if there's some file that's not pointed at by anything it'll go away and so you'll never get a file doesn't exist problem here but in this symbolic link it's possible that you find a file name that maps to something that doesn't exist so these symbolic links are much more uh convenient for producing trees of file names that are sort of part of build packages and stuff but all right so that's the difference between a hard and a soft link so there are a number of kernel facilities that actually will go ahead and use symbolic links as if they're real links so for instance if you do open and you give a long string part of resolving that file name may actually go through different symlinks and that will work fine because that's been set up to interpret the symbolic links properly all right now let's look at one last thing about directories i just wanted to show you this one more time so what if we're opening slash home cs 162 stuff.txt so the first thing is we're going to have to find the i number for the root inode configured in the kernel we'll say it's 2 for instance we're going to read that inode 2 from its position in the inode array so remember in the outer block group there's going to be an array of inodes we pull that out we examine the inode we find the first block and we start working our way through so we take that inode we take the block we scan through it until we find home mapped to another i number say 8086 okay and then we look up 8086 for slash home okay and that's going to be another inode structure which is going to give us another block which we can look up okay and then yet a third one okay and then finally when we get to slash home cs 162 we've looked up slash stuff okay and last but not least now stuff is actually pointing at the inode which is actually the file and so what
i've got in green here is reading the file so every directory is a file just like files are files and what you're doing is you're traversing your way through the directory structure until you actually get to the files of interest okay all right and this little thing i have in the lower right here actually represents uh the block cache which i'll talk about a little bit okay um and the last thing i wanted to mention is remember everything in an operating system is a cache and so because everything in the operating system's a cache we can cache all of this stuff that's what you're seeing here but we can also cache the translations in a name cache and so assuming nothing changes in this directory structure then we have a name cache which says that well slash home slash cs 162 stuff.txt and also the intermediate pieces are actually stored in a hash table so we can very quickly look that up in memory assuming that nothing in the underlying file system has changed and so that cache of names called the name cache makes looking things up subsequently like for instance if we wanted slash home cs 162 slash other stuff.txt much faster because we wouldn't have to traverse our way through those directories okay what happens when the array runs out of space you mean the name array i'm assuming so the great thing about the name cache is since it's just a cache it doesn't matter if you throw things out you can always get them later if on the other hand you're talking about the inode array and there's several of them yeah okay so there's several inode arrays throughout all the different block groups if you run out of inodes you can't make any more files and i've had some file systems in the past that i've made where there were so many little files that we actually blew out the set of inodes and at that point you basically can't make anything new so it doesn't matter how full the actual data portion of your file system is that's it game over all right so
um one thing that's unfortunate a little bit about uh and which i haven't really shown you here in great detail is if you have a really long directory with a zillion files in it standard unix forces you to go linearly through the directory to find the file you want and it's not indexed in any way it's just linear and happens to be there in the order in which it was put into that directory so that's pretty inefficient there are things like freebsd and netbsd openbsd actually have the ability to swap in a b tree-like format for data inside the directory to give you much faster lookup possibilities okay but that's optional linux doesn't do that and a lot of uh early unix systems didn't do that as well so let's now talk about windows nt or ntfs um because uh this is a little different so this was the new technology file system uh it's the default on modern windows systems this was kind of what came in after the fat file system was uh you know developed and then rejected as too flaky and really not reliable enough for a big heavily used file system so ntfs came along and what's interesting that's very different from uh the bsd file system we just talked about is we have variable length extents rather than fixed blocks so in the file system we were just talking about all the blocks were the same size call them 4k inodes were smaller typically an inode is like 128 bytes so there are several of them in a block in ntfs there's a possibility of most of the disk space uh being laid out in variable extents where um you sort of have the first block and then um the length and you'd follow it along a track to get all of the data represented there and so you could really represent a chunk of data that was many tracks worth of data that way um and what's the internal portion of that file system well instead of the fat table like we looked at or the inode array you actually have the master file table which is like a database okay and like a
database it has a maximum one kilobyte size entry and each entry is essentially representing a sequence of attribute value pairs but one of the things you can have is data and so you can have an attribute value pair which represents data and the value is the actual data and so this is an interesting twist because it allows you to have both the metadata describing who can read and write uh the file and the data itself all in one chunk in one kilobyte size entry unlike what we've been talking about with the fast file system okay and so every entry in the mft has metadata and the file's data directly or a list of extents or for really big uh files pointers to other mft entries with more extents okay and rather than worrying about whether you got that all i'm going to show you here in some pictures so here's an example of the master file table it's like a database like i said and so there's a series of these records and these records have within them both metadata and potentially pointers to longer extents okay and block pointers basically cover runs of blocks now instead of individual blocks okay and this is actually similar to what linux did in ext4 as well and the other uh interesting thing about ntfs is when you create a file you can give it a hint as to how big the file is going to get so that it can pre-allocate a big chunk of contiguous space for you so this has the ability to be higher performing under some circumstances i mean it also has journaling for reliability we'll discuss that a little later so here's an example master file table here's one entry there's some standard info is one of the attributes you can have and that's basically the stuff that we put in the inodes for the bsd file system things like create time modify time access time who's allowed to access the file et cetera there's the name of the file which is included in this record so that file name is kind of like uh you know this is kind of like in the
directory right so this is the actual file name and then there's a bunch of data which could be resident and all of this could be together in a single one kilobyte master record and so as a result it's very much more efficient at small files than the bsd file system is because everything's all together you don't have to have both the inode and the data now if we get bigger files then what we can do is rather than just having the data in this part of the mft record we can start having pointers and links to bigger extents that are spread throughout the disk okay now hopefully those of you remember when we discussed fragmentation back in the memory days when we were talking about virtual memory as you can imagine since we're not using blocks that are all the same size but rather extents that are variable size we all of a sudden have this problem with potential fragmentation when we start allocating freeing allocating freeing and so um that can be a problem okay and so on an ntfs file system you can actually start getting some fragmentation over time here's an example of a very large file where we have some of the master file records pointing at other master file records and then each of those individually pointing at extents and so you can make really really really big files if you want again at the potential expense of fragmentation problems uh here's an example of a huge fragmented file okay with lots of extents spread all over the place one of the problems once the space becomes fragmented is you can no longer have big extents you have to have lots of small ones and so when you finally get an extremely fragmented file system then things don't perform well at all and you have to go through and defragment okay so other little things about ntfs the directories are b trees by default the file number for a file is really its entry in the master file table the master file table always has file names as part of it so the human
readable name file number of the parent directory etc what's kind of interesting as well is if you have multiple names hard links for the same file they can be in the master file record so looking at one record you can know all of the individual names and what directories they're in as well as the data okay okay now let's talk about memory mapping for a moment um so memory mapped files um are a different way to do i o so we've been talking about that interface where you open a file and then you read and write it then you close it and this involves multiple copies into caches and memory and system calls etc what if instead of open read write close we just map the file into memory just like we map other stuff into memory when we were talking about virtual memory and then you can just read and write memory locations and you implicitly page the file in and page it out and so on and so file manipulation suddenly looks like memory reads and writes okay and this is something that is a well-defined interface executable files are treated this way when you exec a process what happens is you actually just point the virtual memory to where that executable is on disk and then it'll start faulting parts of the code in as it's needed okay so here's how that works so if you remember by the way this is a slide you've seen but let me remind you of how virtual memory works since i know we've passed midterm two and so everything you learned in the first two thirds of the class is now fuzzy but if you remember we basically have an instruction access that tries to get looked up in the mmu if we're lucky we find an entry in the page table and that uh goes ahead and lets us do the access on the other hand if there isn't a page table entry and we get a page fault then what happens is we get an exception we go into the kernel the page fault handler takes over it starts scheduling a read from disk which can take time and then eventually when that's finished it updates the page table and
then we go back on the ready list we get rescheduled and we try it again and this time we succeed okay so that's virtual memory so why not do the same thing with just regular files so here's the idea so you use a call from the c library called mmap which goes to a system call and what it does is it says well here's a file map it to a region of the virtual address space which really means we create a set of virtual address space pointers that point at the file and now we go ahead to read and if we try to read from the file region here in blue and nothing's mapped we'll get a page fault just like we did for virtual memory it gives an exception now what happens is we read a part of the file into dram and then we get to retry and at that point we go forward and read the contents from memory the mappings are set up and um you know we just read from memory and we get stuff out of the file so what's neat about mmap is it actually lets us take a file put it in memory and now all of a sudden we're accessing it as if it was just data in memory not on the disk okay now if you were to look up mmap you could do man on it whatever here's a variant of it where what you do with the mmap call is you give it an address in your virtual address space where you want to put the file you give it a length of how much of that file you want and then some other flags and it basically lets you map a file into a specific region now if you don't care where it is in the address space then you just put a zero in here and it'll return as a void star it'll return an address where it decided to map it in your virtual address space okay this is perhaps close to what you're supposed to do for project three um this is used uh both for manipulating files and for sharing between processes so i wanted to show you an example here so here's some code that um is very simplistic but if you notice i've got a static global here called something that's set equal to 162.
i've got some things on the stack and so what i'm doing with printing to the console is i'm saying where the data is well i'm just telling you where the static part of the data is so that's uh the address of something the heap is what i get back from malloc if i malloc a byte and then the stack is at uh the address of the m-file um variable and that kind of tells me where the stack is and then what i do is i print that stuff out and then i open my file which is an argument and assuming that everything is good i get down here and notice what i did for my mmap i said uh find me an address i don't care what it is that's what will come back for m-file i say um length 1000 i say allow reading and writing and then these um flags are basically saying go ahead and map this as a file and here's the file descriptor i've already opened okay and so if i run this thing notice what happens it tells me where data heap stack and mmap is at so mmap is the thing that comes back from our mmap call and notice how the map is low in memory it's not high in memory like heap or stack which is interesting there right so um the thing that we get oh by the way so it prints those things out and then it tells me what was in the file test let me back this up okay but notice once i've uh printed out what was in that file notice how i did that i just said printf mmap is at and that gave me this guy and then just by saying put the string m-file on the screen it just printed out all the contents okay so that's this is line one this is line two this is line three all of those contents got printed out on the screen just by printing uh the string that was at that variable okay so this line here is innocuous as it is but this puts of m-file line is actually doing a read from the file and printing it on the screen just by pretending that that string was already in memory okay and then this is where this gets a little amusing we uh go
20 characters in and we write over it with something by string copying this string over that spot and then we close the file descriptor which is going to flush everything out and we return and what happens when we cat test and see what's in there is notice if we were to count 20 we would see that starting at byte 20 is the let's-write-over-it portion and so we've actually literally written over this part of the file simply with a string copy okay good now we could also think of this as a way to share so we could have a file in memory and we can map that file into different places doesn't matter they could be the same or different places in the virtual address space and once we've done that now we can share in shared memory with the file as kind of a backing store for what we've got okay now you can kind of see that the file is a little bit uh extraneous here in some sense because maybe we don't care uh what's in the file we're just using that as an excuse to set up this channel in shared memory and it turns out that there are ways of setting up something called anonymous memory so we did map file in the earlier example i gave you you can do map anonymous which will mean do this kind of setup but without the file okay all right so different processes have different address spaces yes so this part of the file goes to a different part of virtual address space 2 than virtual address space 1 given the way i've shown this this is shareable the reason this is shareable is not because the virtual addresses are the same but because data i write here that shows up there is data that this guy can read yeah and again notice the irony of what you said there about this being through the file is it is through the file but it doesn't even have to go to disk for this to happen okay now if you wanted to do this for real you might want to use anonymous memory if you really didn't care what the backing store was but the other thing is
you're going to want to find out what the address was you got allocated on one of these and try to use that in the mmap on the other so that you can actually align the virtual address space portion of this as well so that then you can actually have shared lists and other things in that shared memory all right good so the kernel has to keep this interface going where the user portion thinks that everything's bytes and the disk is in blocks and so we've got that mismatch to start with and um basically the kernel has to pull things off of the disk and put them into memory to do that matching so if i'm going to read four bytes at the beginning of the file it's not going to read four bytes off disk that's not even possible it's going to read a whole block of 4k put it in memory and then give you the first few bytes and the good thing is if i keep reading i don't have to go to disk again until i run out of that block so that seems like uh maybe we ought to start talking about caching here so that multiple processes for instance can share data that's come off of the disk okay and again just because um you know in operating systems as i said everything's a cache so the buffer cache really is this generic cache of blocks in memory okay that's separate from virtual memory so this is not the blocks that we choose to use to help map virtual addresses but rather this is a set of blocks purely as a cache and it can hold things like data blocks and inodes and directory contents etc for future use and it can also have dirty data so if you write a block in a file it can actually have that data sitting in there uh before it goes back to the disk okay and so the key idea here is we're gonna now set aside some dram to help exploit locality by caching disk data in memory and really help us okay name translations so mapping from paths to inodes disk blocks mapping from block address to disk content etc and as i mentioned this is called the
buffer cache and it's really memory used to cache kernel resources including disk blocks and name translations and it can have dirty data okay so let's look a little bit at this i just wanted to give you an idea so here's our disk surface that we had earlier so the buffer cache really is a set of blocks in memory there's some state bits associated with it and because it's a cache some of these blocks might be free or some of them have been invalidated and really if i were to abstractly think about what i've got in my cache i could say well i've got some data blocks i'm going to think of them as here i've got some inodes i've got some directory data blocks i've got the free bitmap which is actually uh the set of all blocks that are free kept in that bitmap and so um this is a cache on the disk but specifically for access through the file system all right and for instance uh when we have file descriptions of open files that are associated with uh process control blocks and file descriptors those are really pointing at inodes which are locked down in the cache so that when i go to read from the file i can immediately find which blocks i need to read from okay and so this file system is really not a direct uh access to the disk what it is is it's supported on top of the buffer cache and we pull things in and out of the buffer cache as we need them okay and that's really how we do that mapping between byte level operations and even operations on the inode and the block level interface of the disk so for instance let's suppose i'm trying to do an open operation so i'm going to assume i've got my uh inode for a directory that's my current working directory that's already open here i'm pointing at uh this and so this current working directory is uh an inode i've pulled in previously and so what i'm going to do is i'm going to try to look up some other file name relative to that so that i can do an open and so what i'm going to do is i'm going
to wash rinse and repeat i'm going to try to load blocks of the directory i'm going to search through there to find the next directory pointer then i'm going to load blocks from the next one and so on and uh so this is sort of a recursive process and in this buffer cache we have to for instance mark a block as transiently in use when we start using it then we can pull in data off of the directory and now that's cached here and then i can search through it to find the name to i number mapping okay and now i can look up that i number to start reading the data so what i'm going to do is i'm going to put aside a marker here saying this part of the cache is in use i'm going to read the data in and now i've got an inode cached and i can map that to a file description and so now i have my open file uh and its inode is locked down in memory okay and so then from that point on now i can do reads for instance well i've got the inodes so i've got data blocks i can pull the data blocks into the cache and then use them and um you know i'm going to traverse the inode so this thing in green is an inode like we talked about on previous slides and so it's going to let us know which blocks i need to get next i pull them into memory and now i can access them or maybe they're already in memory okay and so typically this buffer cache has got a hash table that lets me find blocks in it very quickly and so that's how i can figure out whether i've already got the blocks in the buffer cache or if i have to pull them off of disk and of course for writing what i've got here is i might actually have a dirty block that basically says this data has been updated relative to the disk and i can't get rid of it until i've written it back to disk and so this buffer cache also has to keep track of what's dirty and what's not okay so this is uh implemented entirely in the operating system in software it's not like the memory caches and the tlb it's a little different because it is in
software we always have to enter the kernel to do file operations as opposed to when we were talking about virtual memory where we had to do reads and writes um you know in hardware and so we needed to put in a hardware interface to keep that fast blocks go through transitional states between free and in use being read from disk being written to disk etc so that as multiple processes are all reading and writing the same data they have to be careful uh to make sure that they don't stomp on each other or take away a block that's there waiting for uh data to come back from disk okay many different purposes for this as i've already mentioned um and uh when the process exits things may stay in that cache uh indefinitely unless they've actually been flushed out okay so what do we do when we fill it up well at that point we need to start finding free blocks and of course we all know that if they're read only we can just throw them out if they're dirty we have to write them back first okay so what's our replacement policy well we could do lru and in fact most uh folks do you can afford the overhead of full lru here because we can link blocks together and know what the oldest one is and the most recent one and so on because we always have to enter the kernel so the number of instructions to do full lru is small relative to the overhead of having gotten into the kernel already this works very well for all sorts of things name translation it fails if you ever have an application that scans through all of the files on disk you should try this sometime just for the heck of it don't do it on your friend's computer while they're trying to use it but if you say find dot that's the current directory and then you say dash exec grep foo et cetera and then backslash semicolon this will go through all of the files in the subdirectories and grep them okay and so there you're gonna blow out the cache if you have lru and so um some operating systems give you the
ability to say for these following file accesses uh do it just once um don't even bother putting them in the cache because i want to keep the things that are in the cache there so how much memory should we put in this cache remember this is separate from the backing store that we use for virtual memory and too much memory and you won't be able to run many applications because you don't have enough for virtual memory too little in the file system and applications run very slowly because there's not enough caching at the disk level and so the real answer is you adjust this boundary dynamically between the buffer cache and the virtual memory and that's pretty much the way modern operating systems work there was a time when i first started building kernels where you actually had to set a constant at build time to figure out how much you put in the buffer cache versus in virtual memory fortunately that's dynamically figured out now so once we've got a cache like this now we can start thinking well maybe i should try to avoid cold misses all right and how do i do that i can do this with pre-fetching okay and so the key idea here of course is exploit the fact that most common file access patterns are sequential and prefetch subsequent disk blocks and so most variants of unix and windows basically say that if i read a disk block i'll read the next couple into the buffer cache and as a result you know i get far fewer times where i'm held up by having to wait for the disk okay and the other good thing is even when you've got a bunch of prefetching from a bunch of different processes if you have a well operating elevator algorithm um then all of those accesses can be rearranged and the head movement can be managed to scan its way through the disk all right and so prefetching uh isn't as bad as it seems because what happens is all of those prefetches from different processes all get
reordered automatically how much to prefetch well if you do too much then you're going to start kicking things out of the cache unnecessarily and too little you have a lot of seeks and so usually it's a couple of blocks for the automatic pre-fetching so delayed writes so the buffer cache is a write-back cache writes are termed delayed writes here and what does that mean it means i do a write to a file it sits in the buffer cache it's not necessarily pushed back immediately to disk okay so write copies data from user space to the kernel buffer cache and returns to the user quickly seems good okay reads are fulfilled by the cache so reads see the results of writes so it doesn't matter that i haven't put the data on the disk yet the reads have to go through the buffer cache as well and so any data i've just written i read back as if it was put on disk and so from the standpoint of an interface i don't know the difference okay that transparent access of the cache is a good thing but i'm sure as you're sitting there you start wondering when does the data from a write actually reach the disk and the answer is well clearly if the buffer cache is full and we need to evict something yeah that's fine but maybe we want to be flushing periodically because the more dirty data we leave in the cache the more chance it is that we'll lose some data so in fact even if uh the cache is being used very well and we have lots of processes all writing the same disk blocks and they're staying in the cache that may be fine to not want to push it out to disk but uh if the system crashes we just lost a whole bunch of data and so there's a periodic flushing that's going on in uh any system that has delayed writes and so we don't have to wait to run out of space in the buffer cache in fact we typically periodically flush it out okay and that's actually about a 30 second time frame as a default in a lot of operating
systems that are unix style i'll mention that in a moment so the advantage of course is you return to the user very quickly without writing to disk the disk scheduler has enough interesting writes for instance that it can reorder and do a really good job when it decides to finally put them on disk of not moving the head too much so that's good we might be able to allocate multiple blocks at the same time so this is also good so if you think about what i told you earlier you open or you create a brand new file and you start writing and the file system doesn't know how big your file is going to be well with the file cache you can actually let them write a bunch of things into the file cache and you can defer even finding physical blocks for that data until it's time to flush it out and at that point you can make sure you have a long enough run on some block group somewhere to handle all of the data you've just written rather than sort of dribbling it in one at a time and trying to find a big run and the amusing side effect of this is you've been doing plenty of builds over the last uh term i mean um where you've done make and what happens here all of these files get created and deleted and created and deleted and an amusing side effect of the buffer cache and delayed writes is that some of these very temporary files may never even need to go to disk because they're created and deleted before anything gets flushed out okay so these are advantages of delayed writes um so the replacement policy uh in demand paging it's really not feasible to do lru as we discussed because you'd have to readjust on every read or write in hardware and so we use an approximation like not recently used or the clock algorithm for the buffer cache lru is okay because we only enter the buffer cache when we're actually trying to do disk reads or writes so that's a little different management for those two parts of memory the eviction policy of course is
that when we're doing demand paging we evict a not recently used page when the memory is close to full the buffer cache we want to be writing these dirty blocks back fast enough that we don't lose any data but not so fast that we don't get some of these advantages i was just telling you about and so that's always a little bit of a trade-off but you can imagine that uh if you're paranoid about your data which a lot of people are maybe this is just not enough so this idea that we're going to flush every 30 seconds means that when you crash you might have lost the last 30 seconds of your information this is certainly not a foolproof way if you flush every 30 seconds therefore of keeping everything around and so um even worse is if the dirty block was for a directory you could lose all of the files you've just created in the directory in the last 30 seconds just because you didn't flush the directory out so that seems pretty bad so metadata like directory data is uh even more sensitive to being lost than the data itself okay so the file system can get in inconsistent states all sorts of bad things happen so take away from this discussion here is really that file systems need recovery mechanisms and ways to protect the information even as we're trying to use a cache to give us good performance and a lot of other benefits we need some way of preventing our loss of data and this idea of flushing 30 seconds you can say i'll flush every 15 or i'll flush every 10. 
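the delayed-write behavior just described (writes return to the user quickly, reads are served out of the cache so they see data that isn't on disk yet, and dirty blocks only reach the disk when a flush happens) can be sketched in a few lines. this is a toy illustration, not any real kernel's buffer cache, and the periodic 30-second flusher is modeled here as an explicit flush() call:

```python
# Toy sketch of a write-back ("delayed write") buffer cache.
# Illustrative only: blocks are dict entries, the backing disk is a
# dict, and the periodic flusher is modeled as an explicit flush().

class BufferCache:
    def __init__(self, disk):
        self.disk = disk          # block number -> data (the "disk")
        self.cache = {}           # block number -> data (the cache)
        self.dirty = set()        # blocks written but not yet on disk

    def read(self, block):
        # Reads go through the cache, so they see delayed writes.
        if block not in self.cache:
            self.cache[block] = self.disk[block]   # miss: fetch from disk
        return self.cache[block]

    def write(self, block, data):
        # Return to the user quickly: update the cache, mark the block
        # dirty, and do NOT touch the disk yet.
        self.cache[block] = data
        self.dirty.add(block)

    def flush(self):
        # What the periodic (e.g. every-30-seconds) flusher does:
        # push dirty blocks to disk. A crash before this loses them.
        for block in self.dirty:
            self.disk[block] = self.cache[block]
        self.dirty.clear()

disk = {0: b"old"}
bc = BufferCache(disk)
bc.write(0, b"new")
assert bc.read(0) == b"new"   # the read sees the delayed write
assert disk[0] == b"old"      # but the disk hasn't been updated yet
bc.flush()
assert disk[0] == b"new"      # durable only after the flush
```

the window between write() and flush() is exactly the data-loss window discussed above: the user already sees the new data, but a crash in that window would lose it.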
it's still not quite enough under a lot of circumstances okay and it depends on how sensitive you are to data loss but maybe we need something additional to what we've got so far so that leads me to talk a little bit about the -ilities um so there are three -ilities that i like so one is the availability which is the one you probably heard a lot of and this is the um the probability the system can accept and process requests so it's often measured in nines of probabilities so like for instance 99.9% of the time means three nines of availability okay the key idea here is independence of failures and that um the system can accept and process requests however one thing you probably didn't know is that availability doesn't mean the thing works properly it just means it responds okay and so availability is something that you often hear quoted oh this is great i've got five nines of availability okay but is it actually still working okay and so that leads to two other things that i think are very important to point out in this space so durability is different from availability durability says that um once written data will never be lost okay and so i'm gonna the durability of data is the ability of the system to recover it under a wide variety of circumstances and it doesn't necessarily imply availability so it's different okay and you know i like to think of the pyramids in egypt from for uh centuries there were all these interesting hieroglyphs on there and nobody knew how to read that data but boy that data was secure because it was uh carved in stone literally and didn't go away so it was highly durable but it wasn't available to anybody and then of course the rosetta stone was found which let people read it and so now the data was available again so if you're ever trying to explain the difference between durability and availability i think the pyramids are a good example there um the third thing is really what people mean i think when they want to brag about
availability they really want to brag about reliability which is the ability of a system or component to perform its required functions properly okay and this is actually there's an ieee definition it's certainly stronger than availability it means the system's not just up but it's working correctly correctly and so it also includes like availability security fault tolerance durability et cetera and so really the interesting question for me in file systems is a durability because you know once data is lost it's never recoverable so that's really important and then also reliability is the system reliable or not okay and that that 30 second flush we were talking about is uh is a mechanism for performance you could almost say it's a mechanism uh maybe that improves availability because there's less work being going going on there i'm not really sure i would say that that way it's really not a good mechanism for durability and it certainly isn't a good mechanism for reliability and so it's it's like a very simple heuristic and so what i'd like to do we're not going to get through all these slides today but what we're going to do this time and and the next monday is we're going to talk about how to get durability and reliability out of a file system so let's talk about durability for now so how do you make files system more durable well one thing is uh you got to make sure that the bits once they're on the disk don't get lost and so disk blocks disk sectors essentially have reed solomon coding on them which means that if i'm going to write a four kilobyte block of data i'm actually writing more data on the disk and that extra data or redundancy together with the original 4k data makes that makes it possible for me to recover bit errors that happen in the middle of that block okay and it basically allows us to recover data from small defects in the media and if you think about when we talked about shingle uh recordings and we talked about how close the tracks are and one 
terabit per square inch i don't know if you remember all these numbers we talked about a couple weeks ago the it's really easy for noise and local heat and whatever to cause read errors and so you absolutely have to have really good error correction codes on the disk in order to recover your data that's operating all the time okay and the second thing and so that's that's just part of the disk design um the second thing is you want to make sure that writes survive in the short term so when we write stuff to the buffer cache and we leave it there that doesn't necessarily make sure it's durable in the short term because a crash will immediately remove it so if we start getting really paranoid we could either abandon delayed writes entirely which is going to have a huge performance hit or we could do something like battery backed up ram that's called non-volatile ram or flash etc that's actually associated with the file system where we put things until they get pushed out to disk okay and a lot of hybrid disks these days actually have flash on the disk and as a result you can do a really quick write to the flash memory on the disk and it will worry about getting it eventually on the spinning storage and so that's another approach to making sure things are short-term durable okay so once we've got reed solomon codes and then we make sure the data that's written but not yet on disk is stable that's a good start but now we have to start worrying about for instance what if the disk fails okay so how do we make sure things survive in the long term well we need to replicate more than one copy so and the importance of this is independence of failure so we could put copies on one disk so that we have different copies on the same disk the problem is if the disk fails that didn't help us much right so we could put copies on different disks but if the server fails maybe the disks fail too we could put copies on different servers okay but if the building's struck by lightning
that doesn't help us we could put copies on servers in different continents or we could have a copy uh you know in our archival store on the moon that we beam up there with a laser or something there are many ways to deal with this but what we want to do when we when we're really trying to be paranoid is we want to put our copies out in a in places that have independent failure modes so the problem with uh different servers in the same building is that if that got struck by lightning and fried all the servers in that building that's not independent failure mode okay so um to now back us down from worrying about lightning here for a second so i'm sure you've heard about raid i just wanted to remind you so one type of redundancy that's very easy to use these days is what's called raid 1 that's from the original patterson naming scheme which is disk mirroring and so the idea is that every disk that we have in the system actually has a partner that uh we put the same data on okay and what's good about this is to call it the shadow disk and so every time you write you actually write two copies of the file system and they go to both disks and what's great is from a high i o standpoint i can read back at twice the bandwidth i did before because i've got two copies okay this is the most expensive way to get redundancy at the disk level because we're we need 100 extra uh data storage so the bandwidth is sacrificed a little bit on the rights because we have to synchronize our two pieces of the file system and so on um reads can be optimized and recovery is fairly simple because if a disk fails you just replace say the pink one here failed i just put a new pink one in and i copy from the green over to the pink and as soon as that copy's done then i'm back up to go and one of the ideas that people use sometimes is what's called a hot spare which is a disc that's just sitting there in a power down mode ready to go as soon as a disk fails okay and you can buy um if you buy 
anything bigger than a laptop these days you buy a desktop or a small server or whatever raid one is something that you can easily order from dell or from whoever you buy your computers from and they just put in two disks and they set that up and it just works and i i've gotten saved by that many times over the years where i have all this configuration and a disk failed on a brand new machine i just called them up and they sent a new disk out and i plugged it in and it was as if nothing bad ever happened so that's kind of cool the downside of course of this uh just give me a couple more moments here and we'll be done the downside of this of course is this has got a hundred percent overhead so another option is so raid five and so raid five again this doesn't have anything to do with number of disks this is just in uh patterson's naming scheme but in this instance here we take a set of disks and um i'm showing you five disks here uh which doesn't have anything to do with raid5 but what you notice is that at any given time we have four of the disks blocks say this is the block zero on disk one block zero and just two block zero and disc three block zero and disk four they're all xored together to produce a parity that's on disk five and we do that for every block and so now this is much more efficient from a storage overhead standpoint because really my overhead is only one out of five disks as opposed to here where my my overhead is one out of two disks okay and so um basically all of these groups together is called a stripe unit they're all written potentially at the same time um we get increased bandwidth in writes and in reads potentially if we do this right this green block is gotten by taking all of these data blocks and xoring them together to produce the parity and we notice that i rotate the parity through and the reason is that the parity block is uh kind of a high contention point if i'm trying to overwrite a small amount of disc block 2 i actually have to 
read the parity read the disk block write back disk block 2 write back the parity and so that parity gets highly used uh over the other disk blocks and so we just rotate the parity through okay and we can destroy all of the data in one complete disk and get it back how does that work well we just say oh i lost everything here so now if i put a new disk in there how do i get back all that data well it turns out i can just xor d0 d1 d3 and p0 together and i can get back d2 okay and the way i've described this is like in the same box but we could actually spread this across the internet uh and have each one of these disks in a different cloud storage area and we could make for very stable data all right questions okay has everybody seen the raid technology before i think they talk about that in 61c maybe i'm not sure good so um what i want to close with is i want to tell you that raid 5 just isn't enough okay so and in general all of these raids are called erasure codes which means that i know for a fact that uh this disk is dead how do i know that well i know that disk is dead because either the motor doesn't spin up or all of the error correction codes of all the data on the disk is uh they're failing and so we just know the data is gone and we can't recover it that's called an erasure and these codes are erasure codes so what what does that mean here that means in this instance i erase this whole disk and so when i reconstruct that data i don't try to get anything off the disk instead i xor the other four disks together to get it back because this is effectively erased and so these are all erasure codes and today raid 5 which can replace one disk is not enough because new disks are so big that if i was doing that recovery process by the time the recovery was done i might have had another failure in the meanwhile and so you actually need something like raid six or higher in today's disks which allow two to fail okay and i'll talk more about this uh next
time but notice that what we've got is we're talking about durability how to make sure the bits once we've got them are stable we haven't talked about reliability which is going to be more interesting and get us into transactions as well so in conclusion we talked a lot about file systems how to transform blocks into files and directories we optimize for size access usage patterns we're trying to maximize sequential access by finding big runs of empty blocks and the os protection and security regime all of those that metadata is in inodes typically and so it's associated with the file not with the directory okay so the file is defined by the inode we talked about naming which is actually working our way through the directories to find the i number of the file we're interested in and that naming in each directory could either be linear or it could be a b tree of some sort we talked about 4.2 bsd's multi-level index scheme which is currently used in linux and several others we also talked about ntfs as an alternative we talked about file layout and how to do free space management in block groups we talked about memory mapping with mmap we talked about the buffer cache which can contain dirty blocks that have to be written back properly uh and we talked about multiple distinct updates well actually let's leave it at that for now so um i think i will wish everybody have a great uh holiday on wednesday and we'll see y'all back on monday
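the raid 5 recovery described in the lecture above (xor d0, d1, d3 and the parity p0 together to get back d2) can be sketched directly. this is a toy illustration with four data blocks and one parity block per stripe, not a real raid driver:

```python
# Toy sketch of RAID-5 style XOR parity and erasure recovery.
# Four data blocks per stripe plus one parity block; losing any
# single block (an "erasure") is repaired by XORing the other four.

def xor_blocks(blocks):
    # Byte-wise XOR of equal-length blocks.
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

d = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # D0..D3 on disks 1..4
p = xor_blocks(d)                           # parity block on disk 5

# Disk 3 dies: we KNOW D2 is erased (motor won't spin, ECC fails),
# so we rebuild it from the surviving blocks.
recovered = xor_blocks([d[0], d[1], d[3], p])
assert recovered == d[2]
```

because each byte appears twice in d0 xor d1 xor d2 xor d3 xor p0 (once in the data, once in the parity), xoring any four of the five blocks cancels everything except the missing one, which is exactly why these are erasure codes: you must know which block is gone, but then recovery is a single xor pass.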
CS_162_Operating_Systems_and_Systems_Programming_Berkeley | CS162_Lecture_15_Memory_3_Caching_and_TLBs_Cont_Demand_Paging.txt | hello everybody welcome back to uh cs 162. we are um going to pick up where we left off and uh that is uh talking about caching just to remind you a little bit uh from 61c and before we get there though we've been talking a lot about virtual memory and one of the things i wanted to show you was what i like to call the magic two-level page table which is uh works when you have 32-bit address space 4 byte pte's page table entries then you can do this 10 10 12 pattern you have the root of the page table pointing at the root page and then the first 10 bits basically select one of a thousand 24 entries and uh then that points to the second level page table the next uh 10 bits point to an entry there one of 1024 entries and then finally we point to the actual page and of course on a context switch you have to save this page table pointer and that's it because the rest of this is in memory uh and there's valid bits on the page table entries so you can in fact page out parts of the page table if you don't need it and so you can basically just set uh an invalid bit in the top level page table entry and then you can page out the second level all right were there any questions on this so this works particularly well when you have 12-bit offset which is 4096 byte pages you have uh and four byte page table entries okay good now by the way there was a little question on uh piazza about paging out the page table or putting it in virtual memory i'll say a little bit about that later but i do want to point out that the nice thing about this particular layout this is all done in physical space physical addresses are actually in the page table in this case you still can page out part of the page table so it's pretty close to being like virtual memory except that the actual addresses are physical ones so the other thing we were talking about was we talked 
about the uh transit uh translation look aside buffer tlb and we've talked about it as looking like a cache and so when you get a virtual address coming out of the cpu we quickly look up in the tlb to see whether uh that virtual address has been cached and if the answer is yes then we can go right on to physical memory and by the way this physical memory here uh can be a combination of cache and dram or whatever and uh this can be very fast it can be at the speed of the cache for instance on the other hand if the tlb doesn't have our virtual address in it then we have to go and translate through the mmu which usually involves walking the page table once we get the result back we put that in the tlb and then we continue with our actual access and subsequent ones assuming we haven't kicked that address out of the tlb will be fast and of course the cpu in when you're in kernel mode can basically go around the tlb to uh look at things in in the physical pages okay and of course the question is does there exist any page locality that could make this work because the only reason the tlb would work as a cache on addresses is if there was an actual locality and basically what i said here was well instructions clearly have page locality stack accesses have locality and so really it's a question of the data accesses which also have locality so in many cases they don't have as good a locality as the instruction in stack but it's pretty good okay and can we have a hierarchy of tlbs yes so if you remember from 61c you could have a two level cache a first and second level so you can also have a two level tlb okay and i'll say a little bit more about that later so what we're going to do now in the early part of the lecture is i want to remind you a bit more about caching and and then we'll talk about tlbs and then eventually uh maybe about halfway through the lecture then we'll change to uh how we actually use the tlbs and page table entries to get uh demand paging and a few 
other really interesting aspects of the the memory system so we uh at the very end of the lecture last time we were starting to remind you of sources of cash misses and uh i like to put these down i call these the the three c's plus one because mark hill uh when he was a graduate student here came up with these three c's representing the source of cash misses and he went on to be a very well-known computer architect faculty member in university madison wisconsin but um the three c's that he came up with in his phd thesis were compulsory capacity and conflict a compulsory miss is one that basically has uh represents an address that's never been seen before and so there's no way the cash could have had it in there because it's never been accessed the only way getting around a compulsory or cold mist sometimes it's called is basically if you have some sort of prefetching mechanism that can predict in advance then you could possibly get around compulsory misses the other two capacity and conflict are uh more interesting so the capacity miss basically is a miss that occurs because the cache is just not big enough so you put your uh your data into the cache the cache is giving you very fast access for a little while and then eventually you try to put too many other things in there and you kick something out and so when you miss again that's going to call the capacity miss because the cache is too small now that's a little different from a conflict miss uh and a conflict miss is a situation where it's not that the cash was too small but that uh the places the slots and the cash that you're allowed to put something were too few and so you put something in the cache and you're happily using it but then you put another couple of things that were in the same slot and i'll show you that in a moment um just to remind you again and it kicked it out and then when you go back and miss again it's a conflict miss okay and of course um the the plus one portion of this is coherence 
miss this was not talked about mark hill's thesis but um i like to put this down just to remind you that this is another source of miss where if you have a multi-core let's say with two processors two two cores one of them loads something in the cache and it's reading it the other one goes to write that item the only way to keep the cache actually coherent is for an invalidation to kick it out of the first cache so that it can be written cleanly in the second one and at that point when the first processor looks again it's a miss and that's a coherence miss all right so um do we have any uh questions on this okay these are all reminding everybody of 61c i hope so um now when we're using a cache this sort of generic uh way of looking at addresses is the way that uh we often like to think of it so this full width uh from left to right is the number of bits of an address so for instance in a 32-bit processor this might be 32 bits the bottom portion let's say five bits represents an offset inside a cash block okay and so once you basically store things in the cash at a minimum size maybe 32 bytes or in a modern processor can be 128 or 256 bytes that all of those bytes are pulled in at once or kicked out at once and so the block offset really just says well once i found a block in the cache where do i access it from the next two fields the index and the tag are somewhat more interesting so if you imagine the bottom part is within a block then the the top two pieces are about finding the block for you the index basically is uh selects the set and then the tag sort of says well within this set of possible cache entries let's see which one might match the one i'm looking for okay so the block is the minimum quanta of caching it's the minimum thing that goes in and out think of that as for instance 32 bits excuse me 32 bytes or 128 bytes and many applications don't really have um the data select field in them because actually this is sort of the whole quanta goes back and 
forth but if you look at a processor accessing a single byte then you need the offset the index is used to look up things in the in the cache and identifies what we'll call the set for you that are remembering and then the tag is used to show you the actual copy so um let me give you this in figures because that's always easier here so let's look at the first things called a direct mapped cache and a direct map cache is a two to the n byte cache uh for instance where the upper the uppermost 32 minus n bits are the cache tag okay and the lowest m bits are the byte select so let's take a look at this so here's an example where we have a one kilobyte direct map cache with 32 byte blocks okay so if the blocks have 32 bytes in them we know that there are 32 entries how many bits do we need to uh represent 32 entries quick do your log base 2 everybody five very good so um here's the layout of a direct map cache so what i'm going to do is my cache data has room for 32 bytes uh it has room for a tag and it has room for a valid bit and there are some number of these okay and what we do is we take this address which has five bits as was mentioned by folks on the chat okay and uh the cash index is uh going to be used to look up in the cache and then we're going to match the tag so the first thing we do is we take the index and in this case there's five bits of index and so that's going to select one of 32 cache lines and um then once we've picked a target cache line then we'll check and see the tag match or not and if the tag matches then we know we're good to go and at that point we can actually say well this cache line is valid so i'm now going to use the byte select to pick which of the 32 bytes and i'm good to go okay and notice that there's only one slot here uh to put in the cash for something that has this index okay so that's why it's called a direct mapped cash because you give an index that only gives you one possible cash line and then that cash line or cash block 
is then a single tag is matched okay does that remind everybody about how this works okay so now let's go a little further on this unless there are questions okay this is pretty straightforward and again again the reason it's direct mapped is there's only one full cache line that comes out of the cache index so here's a set associative cache where we do oh and the other thing i want to say about this is notice that we have five bits in the byte select and five bits in the cache index so that's a total of ten bits which is 2 to the 10th is 1024 which you all have memorized uh quite well now and that's a one kilobyte cache so our cache total if you add all this up there's 1024 bytes in there and that's because there's 10 bits that are selecting it okay now let's go to a set associative cache where in general we could have n-way i'm going to show you two and i'm going to make the same size cache total but i'm going to do this as a two-way set associative so we're always going to have in this set of examples five bits down below but now instead of five bit index we're going to have four bits and the reason for that is we have two separate banks of cash and so that index actually selects two different cache blocks one from the left bank and one from the right bank and then once we've got it now we got to compare the tag with two different tags and so that'll say which of these two lines that are in the cache represent what i'm looking for okay so this index now is selecting two things i check the tag on both sides as a little comparator and assuming things are valid that's why i'm checking the valid bit then if i match i get a one out of here that selects for the mux and so in this example um the left one matches the tag the right one didn't and i get data out okay so that's a two way set associative cache two ways uh set associative because we have a set that's got basically it's not direct mount okay questions good reminding everybody of 61c i hope oh yes you have 
enough questions so go ahead type questions into the chat please oh okay thanks so um go ahead okay so why we call this the cache tag the cache tag is basically all the other bits okay it's basically everything that is uh not the byte select or not the offset and the cache index the rest of that's the tag and you need to check that the tag matches because that's how you know that this block is the one you're looking for as opposed to representing some other part of memory okay and so this tag will be big it's going to be basically everything else and the tag is not in the data it's it's separate from the data you could think of this as metadata on the cache okay we good on that everybody so now um well we could do this uh arbitrarily where we keep shrinking the index and we have more and more banks until we basically have zero bits in the index okay and that's called fully associative it looks kind of like this okay so um here we have uh 32 places that in the cache 32 blocks just like we did before but they all have a tag and they're all checked in parallel so notice we take the tag which now is 27 bits because we totally eliminated the index and we compare with all of the tags and we pick one and that's the one that's going to select for us which cache line is valid okay and so think of this as the extreme case of a uh set associative cache where we completely get rid of or we we have one complete set which is basically the whole cache okay now can anybody tell me uh which one of these associativities either direct mapped two-way set associative or fully associative is faster and why okay i'm seeing a bunch of people saying the direct mapped is faster but look it's checking all of these tags in parallel why is that not faster by the way you're right direct mapped is faster does anybody know why direct mapped is faster okay so now i'm seeing some folks are kind of on the right point here but it's there you go last person got it here it takes a long time to
propagate so you got to think like hardware not like software so first of all what you see here on the screen here the fact that we're checking all the tags doesn't take extra time because it's all happening in parallel okay so you gotta you know the cool thing about hardware and you know i'm a hardware architect so i think it's cool but is we're not we don't have to do this serially one at a time we're doing all this in parallel so you might think off the bat that the fully associative was faster but in fact what i didn't show you on the screen is once i've done a match then i have to take this data and i have to um select from it uh in parallel based on the matching of the tags and so i'm selecting sort of one cache line out of 32 which is slower because it's multiple levels then this other case which i'm selecting one of two which is yet again slower than the direct mapped case where i don't have to do any selecting at all so direct mapped is actually faster in hardware okay and the other thing i will point out is that fully associative because it's so much bigger we're checking all of those uh tags in parallel actually takes up more space on the chip and as a result there's speed of light issues and so it takes a little longer for the signals to get around and so the fully associative is actually slower as well because it's bigger okay so propagation speed uh and size of things on the chip actually can matter when you're thinking about hardware so the thing to keep in mind is direct mapped is faster but notice there's something interesting about the direct map cache i there's a whole bunch of possible addresses okay that will all map on top of the same line here in fact anything that has the same index basically all of the well how big is this this is got well we took out 10 bits it's got 20 bits so there's a million addresses that all fit in the same place in the cache and so if i access any of them that's called a conflict miss so when we were looking earlier 
about conflicts i get a lot more conflicts with the direct mapped because there's only one place that i can place those million cash lines that have the same index whereas in the two-way there's two places i can place it so that's less conflicts and then finally in the fully associative there is i can put it anywhere and so there's basically no conflict misses in the fully associated okay good so hopefully that's helpful so now even though fully associative is slower it will have less conflicts in it and so when you start thinking about tlbs we may want to make that decision if the result of a conflict miss requires a miss which is really expensive i might be more have more tendency to want to go to fully associative to avoid misses even though it's a little slower and you might start thinking a little bit about whether a tlb would make sense to be fully associative because the result of missing is going to be going and walking the page table which could be very slow so where does a block get placed in the cache okay this is going to show you those three options here pretty clearly so suppose we have a 32 block uh address space here um and this particular entry is uh one that's going to map a bunch of different addresses so if it's direct mapped then block 12 basically can go only in one place uh this is the address space here's the cache i've got eight entries there's only one place for block 12 to go that's 12 mod 8 is exactly there if i have two-way set associative there's four sets so there's two places that block 12 can go and in a fully asses associative cache there's eight places that block 12 can go okay so this is another way to look if i have the same size cache physically the the associativity this is a direct mapped two-way set associated fully associative the associativity says something about what i have flexibility to put an item from the memory space up top here into the cache okay good hopefully this is all similar to everybody so what do you 
replace on a miss? So if you're going to load something new into the cache, you've got to kick something else out. With direct mapped there's only one place you can load, right? In this case, if I now go to another address, say block 20, and try to load it in the direct-mapped cache, there's only one choice of which line to kick out; it'll be this one. That's easy. But with two-way set associative, now I've got to pick one: I could evict either of the two. And in fully associative I have to pick one of many. That choice is called the replacement policy. So for direct mapped there's only one choice; for set associative or fully associative there are lots of options: random, least recently used (LRU), et cetera. What you can see is that for a cache, the difference between random and LRU is often very small, especially with a larger cache, so random works pretty well once you have higher associativity and bigger sizes, and the cost of keeping track of which line is least recently used is often not worth it in a cache. That's not going to be the case when we get to page tables in a moment, in paging.

The other question is: what do you do on a write? We have two options, write-through and write-back. In the case of write-through, when the CPU writes data, I write it into the cache and into the DRAM; it writes through the cache to the DRAM. That's good because it makes sure memory always has the most up-to-date data; it's bad because it's slower: the speed of a write is as slow as the DRAM rather than as fast as the cache. In the case of write-back, I just put the data in the cache and keep track of the fact that the line is dirty, that is, written. That's very fast, but now I have to make sure that when I kick a dirty line out of the cache, because I'm replacing it with something else, I write it back to DRAM, or I'm going to lose my data. So the pros of write-through: read misses can't result in writes, because memory is always up to date and you never have to save evicted data. The con: processor writes are slower. For write-back, the pros are that repeated writes are not sent to DRAM and writes are fast; the con is that it's more complex to handle. Write-through and write-back are used in different places: oftentimes write-through might show up at the first-level cache, but write-back at the second- and third-level caches, because the cost of going all the way through can be very expensive. Okay, good.

Now, there was an interesting issue that came up on Piazza after last lecture, so I made a new slide to represent it. I'm going to tell you something at the risk of confusing people, which I don't want to do, but there is a difference between what's called a physically indexed cache and a virtually indexed cache. Here's what we've been talking about: a physically indexed cache. What does that mean? It means what comes out of the CPU is a virtual address; it goes to the TLB, and assuming the TLB matches, we combine the offset with the physical page frame and go directly to the cache. We look up in the cache, and it's physically indexed, meaning the addresses we hand to the cache are physical. If the TLB misses, we go to the page table, just like that multi-level page table I showed you at the beginning of the lecture, and walk it. All of the addresses in the page table, including the top-level pointer in CR3, are physical addresses, so when we walk the page table, those accesses also just go through the cache; it just works, because everything is physical, and the cache-memory
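(Aside: the write-through versus write-back trade-off from a moment ago can be made concrete by counting DRAM writes for the same write stream. A hedged toy model; the function and numbers are made up for illustration.)

```python
# Sketch: DRAM write traffic under write-through vs write-back
# for a stream of CPU writes, identified by cache line number.

def dram_writes(policy, write_lines):
    dirty = set()          # lines written but not yet flushed (write-back)
    dram = 0
    for line in write_lines:
        if policy == "write-through":
            dram += 1      # every CPU write goes through to DRAM
        else:              # write-back: just mark the line dirty
            dirty.add(line)
    # write-back pays one DRAM write per dirty line, at eviction time
    return dram + (len(dirty) if policy == "write-back" else 0)

stream = [7, 7, 7, 7, 9]   # four writes to line 7, one to line 9
print(dram_writes("write-through", stream))  # 5 DRAM writes
print(dram_writes("write-back", stream))     # 2: one per dirty line
```

This is exactly the "repeated writes are not sent to DRAM" pro of write-back: the four writes to line 7 collapse into a single write-back.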
mechanism of caching DRAM in the cache is all handled by hardware, so nothing else has to worry about it. This is a very simple organization, and by the way, it's the one the x86 uses and the one we talk about most in class. The other big advantage, and you'll see what I mean in a moment, is that every piece of physical data, meaning anything that has a location in memory, has only one location in the cache. That's what physically indexed gives you, because the cache really is just a portal onto the memory; there's nothing special there. So when we context switch, we don't have to do anything special with the cache. We might have to do something with the TLB, which we'll talk about in a moment, but we don't have to mess with the cache. The challenge, as you can see here, is that even assuming we've made our cache really fast, maybe three levels of cache, carefully tuned into the processor pipeline and all that sort of stuff, we still have to take the CPU's virtual address and pipe it through a TLB before we can even look at the cache. So this TLB needs to be really fast.

The other option, which came up on Piazza, is the idea of a virtual cache. This is more common in higher-end servers that aren't made out of x86 processors; I'd say it's less common these days. You take the CPU, and the first thing you do is just look up in the cache, and that lookup, since the cache uses virtual addresses as the index, is fast: you put the virtual address in, and you either hit or you don't. If you miss, then you've got to do something: you look up in the TLB. Notice that I intentionally drew the cache and the TLB on top of each other, because you can actually be looking up in the TLB at the same time you're checking the cache, so you can overlap that. Assuming the TLB hits, we can then just go to memory, find the physical data, and put it back in the cache. These arrows are all for addresses, which is why I don't have a data arrow coming back. And then we're good to go. If we miss in the TLB, we've got to do something more, and if we want the same advantage of caching the page tables, we can have the page tables made of virtual addresses and do a recursive walk of the page table by looking up virtual addresses, which may in turn cause TLB misses, which cause page-table walks. The key there is that you've got to be careful so that ultimately the root TLB entries are pinned, in a way that guarantees the recursion finishes. So there's a little more complexity there.

So, the challenges of the virtually indexed cache. First of all, the benefit: this can be blazingly fast, because there's no TLB lookup between the CPU and the cache. The challenge, though, is that if you think it through a little, you'll see that the same data in memory can be in multiple places in the cache, because remember, every process has its own notion of zero; I've said that several times this term. So if two processes are mapping the same memory, it appears at different virtual addresses, and that can get messy. In fact, with this cache layout, when you switch from one process to another, you have to flush the cache. And the instance where you might have the same data in multiple parts of the cache simultaneously is when you tag the cache with process IDs; we won't go into that here, but then you've got aliases, where the same data can be in different parts of the cache at once, and that can lead to all sorts of consistency problems. All right, I just wanted to make sure everybody saw these two. If that was too much information, we're going to stick with the top one. Though who knows,
I might actually ask you a question about the lower one, but for the rest of the lectures we're going to be mostly talking about this physically indexed one up here. It's popular because of its simplicity, but we need to figure out how to make the TLB fast. So why is the page table virtually addressed here? Just because if we want to cache the page table, which we do, remember, we're walking through a bunch of memory and we don't want to go as slow as DRAM, then the page table has to have virtual addresses in it rather than the physical addresses we've been talking about, because that's the only way we can come back through the main cache. Now, we could pull other tricks: a separate cache just for the page table, or a separate pipeline that tries to make accessing DRAM faster. But the simplest idea here is to make the page table virtually addressed, and it's actually not so crazy; it's definitely done by some server machines. The trickiest part is not what we just said, though; the trickiest part is the aliasing in the cache, and that gets messy quickly. The top cache, by contrast, doesn't have to be invalidated on a process switch, because that cache is purely a portal onto the underlying memory: one memory location maps to one location in the cache, and there are never multiple cache locations for the same memory. That's another reason it's the simpler organization. We do need to do something about the TLB, though; we'll come to that in a moment.

Administrative trivia. As we discussed in the chat a bit before lecture, midterms keep coming, and I'm sure you're getting them in all your other classes too, but yep, we're coming up on midterm two, on Thursday, 10/29, and the topics are basically everything up to lecture 17. I've listed a bunch of them here, but basically it's just everything up to lecture 17. There has been a discussion on Piazza about whether these exams are cumulative or not. The answer is: we focus on the new material, but don't forget everything you learned in the first third of the class, because you never know; we might ask you something that requires remembering how to synchronize, or something like that. Most of the material will be focused on these new lectures, though. The other thing is, we're going to require your Zoom proctoring to be set up. I think we're going to be generating your Zoom rooms for you, but make sure you've got your camera, microphone, and audio all set up in advance, because we're actually going to require that during the exam. There's a review session on Tuesday the 27th, and there's a Zoom room just like last time; Neil has all the information for it, and he'll put it out soon if he hasn't already. Also, I guess it was a silent announcement a while back, but I do actually have office hours these days, from two to three, Monday to Wednesday. There's a Zoom link posted on the course schedule and, I think, a pinned Piazza post about it. Definitely feel free to come by and chat about computer architecture, or life, the universe, and everything, or quantum computing, or whatever you like. You probably don't want to come with questions about detailed code aspects of your projects; stick with your TA office hours for that, because this is more for general discussions and interesting questions. And if you want to set up something private with me as well, we can do that.

I saved the best for last: the election is coming up. Please don't forget to vote if you have the ability to vote; it's one of the most important things you can do as a U.S. citizen, so take advantage of it. Whatever your vote is, is fine. Today is actually the last day to register, if you haven't done that, so you do need to do that; thanks for pointing that out. And be safe if you go in person, or fill out your forms and mail them; just don't put them in a fake ballot box, make sure that somebody like the post office actually gets it. What's cool in California is you can sign up and you'll get texts as your vote works its way through the system: the post office says we've received it, then the state administration says we've got it and it's ready to be counted, and so on. So you get to find out about that. All righty, please vote.

So let's go back to some questions now, and I'm going to be talking about physically indexed caches again. Here's our schematic of what that looks like: CPU going to TLB going to cache going to memory. And the question is, what TLB organization makes sense here? Clearly the TLB needs to be really fast, because it's on the critical path between the CPU and the first-level cache. That seems to argue for direct mapped, or really low associativity, to make it fast. But you also need very few conflicts, because every time you miss in the TLB you have to walk the page table, which, even if the page table is cached, could be four or eight memory references just to do a single reference. So you don't want to miss in the TLB if you can avoid it, which means you want as few conflicts as possible, which pushes your associativity up. So there's a trade-off: the cost of a conflict is a high miss time, but the hit time gets slow if the TLB is too associative. There are a lot of tricks played to make the TLB fast; this is kind of a CS152 topic, but
I thought I would say a little bit about it. The other thing is we've got to be careful about what we use as an index into the TLB. If we use the low-order bits of the page number as the index into a low-associativity TLB, you can end up with thrashing, and that can be a problem; if you use the high-order bits, you end up with big parts of the TLB never being used. So you've got to be a little careful about this. The TLB is mostly a fully associative cache, although these days, as I'll show you in a second, the x86 for instance has something like a 12-way set-associative first level backed by a large second level. So how big does the TLB actually have to be? It's usually pretty small, basically for performance reasons: 128 to 512 entries are pretty standard. One reason there are more entries these days than, say, 10 or 15 years ago is that people use a lot of address spaces, including microkernels, which tend to have a lot of address spaces, so there are a lot of TLB entries you might need. The other issue is that as your DRAM and your overall memory get really large, there are going to be a lot of pages involved, so you need more TLB entries. That combination of many address spaces and lots of memory pushed TLB sizes up, and it's why modern systems tend to have a two-level TLB: a slightly smaller one at the top level and a much bigger second-level one, to make things as fast as possible. Small TLBs are often organized as a fully associative cache. There's also something called a TLB slice, where you have a little tiny TLB that's direct mapped at the top level and a second level that's much bigger; if fully associative is too slow there, you can make it two-way set associative.

Here's an example of what might be in the TLB. It's a fully associative lookup, for instance, for the MIPS R3000; that's a very old processor, but it's easy to look at. You might have a virtual address, a physical address, and some control bits. The trick is that the virtual address comes in, you look it up fully associatively to find the matching tag, and the rest of the entry gives you the result of the match. The thing I wanted to show you about the R3000 is that it handled the question of how to make the TLB fast in an interesting way that was possible back in the old days. You need a TLB both for instructions and for data, and what they did was arrange for the TLB lookup to take half a cycle. So although in 61C you learned about a five-stage pipeline, fetch, decode, execute, memory, write-back, here are the actual cycles up top, and what really happens is that the first half of the instruction-fetch cycle is actually a TLB lookup; then there's a whole cycle for the I-cache lookup that overlaps the last half of instruction fetch and the first half of decode. In the case of the data TLB, the address is computed in the first half of the execute cycle, and the data TLB lookup happens in the second half. So they were able to confine the TLB to half a cycle and deal with it by rearranging things in the pipeline a little. In general that's much harder to do these days, when there are many more pipeline stages, as I'm sure you learned. So the thing to ask yourself is: if we're going to go with a physically indexed cache, and we can't split cycles that way, what are we going to do? As we've described it, we're taking the offset and copying it down, of course, but then we're taking the virtual page number, looking it up in the TLB, and copying the physical page number into the final address. The question is how to make this faster in general, and the answer is: well,
one trick is, take a look at this. I'm showing you the virtual address and the physical address, and the physical address is split into the combination of tag, index, and byte, remember, we just showed you that, while the virtual address is the virtual page number plus the offset. If you can arrange for the offset to overlap the index and byte fields of the cache, then you're golden, because the offset doesn't get translated by the TLB: you can pipe the offset straight into the cache and immediately start looking up the index while you're looking up in the TLB. So you're overlapping the cache access and the TLB access, even though logically you have to do the TLB lookup before the cache access. That's the trick, and here's a picture of it: this is with four-byte cache lines, a 4K direct-mapped cache, and 4K pages. What you see is that we take the page number and start looking it up in the TLB, and meanwhile we use the 10-bit index to look up in the cache, giving us a four-byte cache line. You can rearrange this any way you like, but then out of the cache we get the stored tag, and out of the TLB we get the tag to compare against, which is the physical page number; we compare the two after we've done both the cache access and the TLB access. So that's how that parallel thing works. Isn't that cool? "Galaxy-brained," as it says in the chat, yes. Now, if the cache is 8K in this organization, it's a little tricky, but there are all sorts of tricks people play: they might divide the 8K cache into two banks, or, if you raise the associativity to two-way set associative, this still works, right? There are other things you can do; you can pull tricks where you do part of the cache access and finish the rest of it later, and there's all
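(Aside: the overlap trick depends on the cache index and byte-offset bits fitting inside the untranslated 12-bit page offset; then any address and its translation agree on the cache index. A toy check with made-up page numbers; the helper is hypothetical.)

```python
# Sketch: with 4K pages, 4-byte lines, and a 4KB direct-mapped cache,
# the cache index comes entirely from untranslated offset bits,
# so the cache lookup can start in parallel with the TLB lookup.

PAGE_OFFSET_BITS = 12          # 4K pages
LINE_BITS = 2                  # 4-byte cache lines
INDEX_BITS = 10                # 1024 lines -> 4KB direct-mapped cache

def cache_index(addr):
    return (addr >> LINE_BITS) & ((1 << INDEX_BITS) - 1)

# The index must fit in the page offset for the trick to work:
assert LINE_BITS + INDEX_BITS <= PAGE_OFFSET_BITS

# A virtual address and its physical translation share the low 12 bits
# (same offset, different page number), so they get the same index:
vaddr = (0xABCDE << PAGE_OFFSET_BITS) | 0x634
paddr = (0x31415 << PAGE_OFFSET_BITS) | 0x634
print(hex(cache_index(vaddr)), hex(cache_index(paddr)))  # both 0x18d
```

With an 8K direct-mapped cache the index would need 11 bits, the assertion would fail, and you'd need banking or higher associativity, as described above.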
sorts of stuff. Now, the other option, of course, is that if you really want a really big cache and you're running into this problem, you might actually go back to your virtual cache; in fact, Intel has managed to make this work very well with all sorts of tricks for making the TLB fast. Good. So here's an actual previous-generation processor, which is pretty cool. If you look at the front-end instruction fetch and the back-end data side, you'll see here's the data TLB and here's the instruction TLB; these are first-level TLB caches that are backed by the second-level TLB cache. When you miss in the first-level TLB, you first look in the second level, which is by the way much bigger, and it's only when you miss in the second level that you do your table walk. So that's a way to make things faster. For that particular processor, for instance, the L1 I-cache is 32 kilobytes, the L1 data cache is 32 kilobytes, the second level is a combined 1 megabyte, and the third-level cache is actually 1.37 megabytes per core; and oftentimes you get something like 50-some cores, you can get a lot of cores in these particular chips. And to finish out the TLBs: the level-one instruction TLB is 128 entries, 8-way set associative; the level-one data TLB is 64 entries, 4-way set associative; and all of this is backed by the shared second-level STLB, which is 1536 entries, 12-way set associative. So you can see how they pull all sorts of tricks to meet their pipeline timings and thereby keep a physically indexed cache, which is much simpler than dealing with the virtual one. A "core" in this case, by the way, is a combination of a processor, a slice of the third-level cache, some cache-consistency hardware, and a bit of networking; that whole bundle is often called a core, and the processor core is the small piece of that,
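(Aside: one way to see why those TLB entry counts matter is "TLB reach", the amount of memory the TLB can map without missing, which is just entries times page size. A small sketch using the example sizes above; the helper name is made up.)

```python
# Sketch: TLB reach = number of entries x page size.
# With 4K pages, even a large TLB covers only a few megabytes.

def tlb_reach(entries, page_size=4096):
    """How much memory a TLB with this many entries can map at once."""
    return entries * page_size

print(tlb_reach(64) // 1024, "KB")            # L1 data TLB, 64 entries
print(tlb_reach(1536) // (1024 * 1024), "MB") # second-level TLB, 1536 entries
```

A 64-entry L1 TLB reaches only 256 KB of a multi-gigabyte DRAM, which is why working sets larger than the reach cause TLB misses and why second-level TLBs (and larger pages) help.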
and then together those are all put on a chip to get multiple processors. So you can either think of a core as a processor, or as a processor plus some extra stuff associated with that processor; and this, by the way, is just the processor portion of a core, there's a whole bunch of other stuff as well. So, what happens on a context switch? In general you've got to do something. Assuming for a moment that we have physically indexed caches, we don't have to mess with the cache, so that's cool, but we still have to do something about the TLB. The reason is that we just changed the address space from one process to another, so all those TLB entries are no longer valid: for process A, virtual address zero mapped to one particular part of the physical address space, and when you switch to B, zero maps to a different physical part. So we've got to do something, and we have a couple of options. One is to flush the entire TLB with special flush instructions as soon as you context switch. A lot of early processors, and by "early" I mean as recently as seven years ago, had to do this: on every context switch you actually had to flush the TLB, and you can now see why switching from one process to another might actually be expensive, because you're flushing a bunch of state. More modern processors, which Intel has made much more common these days, actually have a process ID in the TLB, so when you switch from one process to another and update an ID register in the processor, the hardware knows automatically to ignore the old TLB entries from the other process and put new ones in. And by the way, when you switch back to process A from process B, it's quite possible that a lot of A's TLB entries are still there, so this is much better sharing of the TLB among multiple processes, and it has the advantage that you don't have to flush the TLB on a context switch. Now, if the
translation tables themselves change, which basically means the page table changes, then you've got to do something: you really have to invalidate the affected TLB entries, and I'll show you more about that in a moment. That's because the TLB is a cache on the page table, and if the page table gets changed by software, you've got to do something about the TLB, which is still in hardware. This is typically called TLB consistency. And of course, with a virtually indexed cache, you'd also have to flush the cache, or have process IDs in the cache, so that's also potentially tricky.

Now let's look at this particular example. There was a question in the chat about the difference between the TLB and the page table, and hopefully this will help a little. The TLB is a cache on the page table, but let's put everything together. Our virtual address, and this is going to be that magic 10-10-12 layout, has this piece in red, the virtual page number, which itself has two pieces, and the offset. The offset, of course, gets copied straight to the offset of the physical address; that's the easiest part of translating from a virtual to a physical address, so always remember it, it's a way to get yourself some points, right? Now let's look at what happens. Physical memory is over in the mint green on the right. We have a page-table pointer pointing at the first-level page table; we take the first index, which gets us a page-table entry; that entry includes a physical pointer to the next-level page table; and then we use the second index there to look up the physical page number. Now we have our physical address. What do we do with it? Well, the physical page number points at a page, which, if the offset is 12 bits, is four kilobytes in size, and the offset picks a chunk inside that page; that's the thing we're accessing. I hope everybody sees the analogy right off the bat between paging and caching, because remember, in the case of a cache we looked up the cache line, this light blue thing, and then used the byte offset to pick the dark blue thing. This is almost the identical idea, except these page offsets are bigger than a typical cache line. We'll bring the cache into this in a moment, everything in one figure.

So this was all fine and dandy, except that we had to walk our way through this page table, which is expensive. The question in the chat was: what's the difference between the TLB and the page table? This is the page table, and to access it I have to do this lookup; it's a tree of tables, and I have to do a multi-level lookup to get the physical page number, which we do not want to do on every access. Oh, and there was a question about how much of this offset we use: that depends on the load or store you're doing, so it could be 32 bits, it could be 64 bits, it could be a byte, any number of things. We'll get to the cache in a moment, in which case we might pull a chunk of this page into the cache; in that case it's a cache-line-sized chunk. All right, hold on a second, I'm worrying about buffering. So this is the page table, but it's too slow, because just to do a load we had to go through multiple lookups, and this can be four to eight levels. That's not good, so I'm graying it out. Now, we want to get from this virtual address to this physical address quickly. How do we do that? We put in a TLB. What we're going to do is remember this translation down here in the TLB: we take the virtual page number, this red thing, and put it in as the tag of, say, a fully associative cache, and this yellow thing is the physical page. Now we've short-circuited the multiple memory accesses by taking the virtual address and quickly looking it up, think of it as a hash table if you want, a fully associative lookup, which gives us the physical page. Quick. So hopefully this shows you the TLB is a cache on this more complicated page-table lookup.

But this doesn't show the actual data cache. The TLB is a cache, but the real cache, the data cache, is this one, which we talked about at the beginning of the lecture. How do we look up the data in the cache itself? This is the physical address that's been translated, and it can be divided into tag, index, and byte, everybody remembers that from the beginning of the lecture, right? This is 61C. The index is used to select a cache line, the tag is checked, and assuming it matches, the byte index decides which part of that cache line we want. So on a prior access we've taken a cache-line-sized block out of physical memory, put it into the cache, and now we do the actual access out of the cache. Notice there are two caches here: the TLB and the regular data cache. I want to pause just long enough to make sure everybody has absorbed that. The dark blue on the right corresponds to, I guess you could say, either this dark blue piece here, if I'm looking at only a tiny bit of this physical memory, or maybe this whole cache line; but actually, if you take this offset as going down to the byte, and there's a particular byte here, then this dark blue piece could be the same as this
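(Aside: the pieces just described, the two-level 10-10-12 walk plus the TLB short-circuit, fit in one runnable sketch. The page-table contents here are entirely made up for illustration; real tables live in memory, not Python dicts.)

```python
# Sketch of a 10-10-12 translation with a TLB in front of the walk.

def split(vaddr):
    off = vaddr & 0xFFF                 # 12-bit offset, copied untranslated
    idx2 = (vaddr >> 12) & 0x3FF        # second-level table index
    idx1 = (vaddr >> 22) & 0x3FF        # top-level table index
    return idx1, idx2, off

# Two-level page table as nested dicts (hypothetical contents):
page_table = {5: {17: 0x31415}}         # vpn (5,17) -> physical page 0x31415
tlb = {}                                # vpn -> ppn, the cache on the walk

def translate(vaddr):
    vpn = vaddr >> 12
    if vpn in tlb:                      # TLB hit: skip the walk entirely
        ppn = tlb[vpn]
    else:                               # TLB miss: walk both levels
        i1, i2, _ = split(vaddr)
        ppn = page_table[i1][i2]        # a KeyError here would be the fault
        tlb[vpn] = ppn                  # remember the translation
    return (ppn << 12) | (vaddr & 0xFFF)

vaddr = (5 << 22) | (17 << 12) | 0x2A4
print(hex(translate(vaddr)))            # walk; second call would hit the TLB
```

The offset bits pass straight through in both paths, exactly the "easiest part" of the translation.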
dark blue piece. Okay. So, "during a context switch, where do we flush the TLB to?" Um, outer space: it goes into the bit bucket, and little ones and zeros go draining out the waste chute in the back of your computer. Basically, when you flush the TLB you just throw things out, because if you think about what's in the TLB, let me back up a little here, this cache is, in some sense, a read-only cache on top of the page table. So you can throw out the whole TLB at any time and always be correct, because there's no uniquely up-to-date information in the TLB, except, except, when we start talking about the clock algorithm, there are things like the use bit and the dirty bit that do have to be maintained to some extent. Okay. Now, "where are we storing the tag-plus-index part?" That isn't stored anywhere; it's just a reinterpretation: I take these 32 bits and mentally divide them into three pieces so I can do my cache access. All right, are we good? I thought I'd throw all of that together so you saw it in one place.

Good, now let's move up a level. We've been talking about the page table translating from virtual to physical, but we haven't talked about what happens when there isn't a translation. What does that mean? There isn't actually an entry in the page table for every possible mapping, so some entries are marked invalid, which means we're going to get a fault: we try to do the access, we get as far as the last-level page table, or maybe the top-level page table, we encounter an invalid bit, and at that point the hardware says, game over, I can't do anything about this. That's called a page fault. So what happens there? The page-table entry is marked invalid, or, in Intel's terms, not present, and at this point that's a problem. Another possibility is that we try to do a write but the page is marked read-only, or some other access violation, or the mapping doesn't exist at all; these are all possibilities, and what happens is we cause a fault, or trap. It's a synchronous fault, not an interrupt; interrupts are asynchronous. This is a synchronous fault because a memory access failed, and at that point we have to do something to move forward. So page faults engage the operating system to fix the situation and then retry the instruction.

Now, good question: what's the difference between a page fault and a segmentation fault? Typically a page fault occurs when you try to access a page through the page table and the access isn't currently allowed. The operating system gets the page fault and can now do something; just because you get a page fault doesn't mean nothing can be done. You might pull something off of disk, you might do a copy-on-write operation, any number of operations; a page fault isn't necessarily fatal. That's one thing. A segmentation fault, on the other hand, is usually thought of as fatal. When your program dies with a segmentation fault, the name is historical: it's called a segmentation fault because typically you were working outside your segment. It is exactly like memory segmentation: when you try to go outside the segment, if you remember back a few lectures, you can't go on, and that's a segmentation fault. In modern systems you could get either a segmentation fault or a page fault in the kernel, depending on the current situation; you'd get the segmentation fault first, because the first thing checked is whether your address is within the segment, and from there you check the pages. So "segmentation fault" is used either generically, for "game over, kill the program," or specifically for a fault that comes because you've violated something about
the segments all right so let's not dwell on that too much um we'll mention it again another uh possibility here this is a fundamental inversion of what we're talking about here the hardware software boundary because the hardware is trying to move forward and it can't move forward until software runs okay so that's a little different than we normally think right normally software has to use hardware to run here the hardware stops and says i can't do anything and the software takes over the thing that's important is when you get a page fault that's the hardware saying i can't i can't do this you can't then in the handler cause other page faults or at least you can't do it recursively in a way that will never resolve because in that case you go to an infinite loop and that's bad okay so we have to be a little bit careful about page faults especially in fault handlers so let's let's look a little bit further on an idea that we might do with this page fault idea okay so um the demand paging idea that i'm going to talk about next harkens back to the cash so here's a figure i've shown you many times over the course of this last several lectures at least which is um the idea that we have many levels of caches in the system okay modern programs require lots of physical memory uh memory systems have been growing at a 25 30 per year for a long time but they don't always use all their memory all the time this is the so-called 9010 rule which means that programs spend 90 of their time with 10 percent of their code or 90 of their time 10 of their data not always a perfect statistic but it's a way to think of things and it'll be wasteful to basically require you to have all of that huge amount of storage that you're not using in instructions say or in libraries that you've just linked but not used in the fastest memory and so this memory hierarchy is about caching it's about making the speed uh of the system look like the smallest fastest item but have the size of the largest 
item. Now, the largest one I show here is tape — probably way before your time. Instead of just disk we could have SSD and then spinning storage; a few people still use tape in some very rare instances, but disks have been getting awfully big, so tape is pretty much a legacy thing now. The trick we're going to pull is to use main memory — which is smaller than disk in almost all cases — as a cache on the disk. We'll think of the image of the process as living, large, on disk, and pull only the parts of the process we need into physical DRAM. Even though the image is huge, we'll try to make it look like we get the speed of the small thing with the size of the big thing — the disk, for instance. And we'll do it by using page faults cleverly: we start with all of the pages invalid — or all but a small number — for a process; as we execute, we get a page fault for a page that's not currently in memory, pull it off of disk into memory, mark the page table entry valid, and keep going. If we do this correctly we'll eventually get the working set of the process — I'll show you what that means in a moment — into memory, so that only the things actively being used take up space in DRAM. We call that demand paging.
By the way, "caching" is called different things depending on who does it. Typically, and especially in 61C, caching means taking the DRAM and using the first, second, third levels of cache as a higher-speed version of it — all done in hardware. We're now going to use the same idea with disk as our backing store, using the operating system to pull the parts of the disk we need into DRAM and mark the page tables appropriately. Same idea of caching, but done in software rather than hardware.
So let me show you. Here we're executing an instruction: we get a virtual address, it goes to the MMU, and in the good case we look up the page in the page table — hopefully via the TLB so it's fast, though that's not relevant for this discussion — it all works, we go to memory and access the instruction. That's the good case: the instruction we want is in DRAM. It's possible, however, that we look it up and it's not there. In that case the page table lookup comes back invalid, we get a page fault, and we enter the operating system with an exception. Now we deal with the page fault handler. What does it do? You're all very familiar with what happens on a system call into the operating system — same idea here. It runs on the kernel stack; it identifies where on disk that page is; it starts loading the page from disk into DRAM; it updates the page table entry to point to the new chunk; and later the scheduler puts the process back on the ready queue, which retries the instruction — and this time it works, and we win. That's demand paging. Notice, by the way, that I sped that up a little: when we start the million-instruction load from disk into memory, we have to put the process on some wait queue, and only when the data is actually in memory do we wake it up, at which point we fix the page table and put the process back on the ready queue so it can run. Okay, this is called demand paging. Questions?
Now let's look a little further. Demand paging is a type of cache, so we can ask our usual cache questions. Why is it a cache? When we missed originally, the page wasn't in the cache — a cache miss; we put it in, and now we get cache hits. It's just like a cache, only done in software, and we pull things in not in cache-line-sized chunks but in pages off the disk. First question: what's the block size? One page — now four kilobytes, not 32 or 128 bytes. What organization do we have? This is interesting — I hope you can all see it — it's a fully associative cache, because the page table gives us an arbitrary mapping from virtual to physical: we can put a page pretty much anywhere we want in memory. How do we locate a page? We first check the TLB, then do a page-table traversal, and hopefully we find it; if we still fail after all that, we might do something more drastic, like kill the process. Earlier, for hardware caches, we talked about replacing randomly or using LRU. The question of the replacement policy here is actually going to require a much more interesting, longer-term discussion, which we'll start tonight and continue in more detail next time. Is it LRU? Is it random? It turns out the cost of a page fault that goes to disk is high — a million instructions — so when our DRAM is full of other pages we have to be very careful about which one we choose
to replace. For the replacement policy we can't just say "random works pretty well" or "LRU works pretty well"; we have to do something else. It turns out we'll want something like LRU, but we won't be able to do it exactly, and we'll show you that in a bit — maybe next time. What happens on a miss? You go to the lower level to fill it — you go to disk. What happens on a write? Here's a good example. Earlier I talked about write-through versus write-back, and maybe at the top-level cache you do write-through to the second-level cache. Here you absolutely cannot do write-through. Why not, for paging — anybody? Yep: it would mean a write from the processor, which is supposed to be really fast, takes a million instructions' worth of time. So absolutely no write-through to disk. What we do instead is write-back, which means we have to keep track — possibly in the page table entry — of which pages have been written, so we know which ones are dirty and must be written back to disk; they can't just be thrown out, because they hold up-to-date data.
Right. So now we want to provide the illusion of essentially infinite memory — on a 32-bit machine that would be 4 gigabytes, on a 64-bit machine exabytes' worth of storage — using the TLB and the page table with a smaller physical memory. I'm showing you a 4-gigabyte virtual memory and a smaller 512-megabyte physical memory, which holds only the data that's actually in memory and has to be shared among a bunch of processes. The page table is our way of giving that illusion of an effectively unbounded — or at least full-address-space-sized — amount of memory. The way it works: the TLB, mapping through the page table, says that certain items are in physical memory while others are not — they're on disk — and it's up to the page table to help with that mapping. The simplest thing it must do is a really quick mapping for the things that actually are in DRAM; we talked about that earlier. For the things that aren't in DRAM there's more interesting flexibility: one option is to put the disk block number into the page table entry bits you're otherwise not using; another is a special data structure in the operating system. These are all options for locating a page on disk once you've missed in the TLB/page-table combination.
Disk being larger than physical memory means the virtual memory in use can be much bigger than physical memory, and the combined memory of running processes much larger than physical memory — more programs fit in memory, more concurrency. The principle here is a transparent level of indirection: the page table supports flexible placement of the actual physical data, and of which things live on disk. That's what we now have to figure out how to manage. Up to now we've covered the mechanism — TLBs, page tables, et cetera; now we need the next level: what does the operating system do to manage all those pages?
Remember the PTE — I showed you this earlier with our magical 10-10-12 example. Here's a 32-bit page table entry. The top 20 bits are the physical page frame number, either of the page itself or of the next level of the page table. Down at the bottom is the valid — or "present" — bit; then whether it's a writable page, whether it's a kernel or user page, whether it's cacheable. Moving up we have interesting things like the D bit — is it dirty or not — which gets set in the page table entry when we modify the page, so that when we're about to replace the page, the D bit tells us whether we need to write it back to disk. The A bit, or access bit, is another one we'll use for the clock algorithm; we'll talk about that in a bit.
So: mechanisms for demand paging. The page table entry gives us the options to do demand paging. Valid means the page is in memory and the page table points at the physical page. Not valid — not present — means the page is not in memory, and you're welcome to use the remaining 31 bits, at least in the Intel spec, for anything you want. One thing you could use them for: notice that if the present bit at the bottom is zero, those remaining 31 bits could identify a disk block, or point into internal data structures the kernel keeps in memory. If the user references a page with an invalid page table entry, the memory management unit traps to the operating system, giving you a page fault. What does the OS do? Any number of things. It chooses an old page to replace; if that page is modified — D is one — it writes the contents back to disk; and it changes the page table entry, and the cached TLB entry, to invalid. Notice what's happening: we pick a page to kick out, maybe write it back to disk; that page goes from valid to invalid, so we modify its page table entry — and since the TLB is a cache on the page table, we also have to invalidate the TLB entry. Otherwise the TLB gives us the wrong answer: it claims a page is valid when it's been overwritten. Then we load the new page into memory from disk. So: we caused this page fault because we tried to access something that wasn't there; we pulled it in off disk, overwriting the one we got rid of; we update the page table entry for the new page to be valid and point at the physical frame we took; we make sure the TLB entry for the new page is invalid — which it already is, since its being invalid is what got us the page fault in the first place; and finally we continue the thread from the original faulting location, and we're good to go. All of these things are what make this a cache — this is how the combination of TLB and page table entries becomes a demand-paged caching mechanism. When the thread executes again after coming off the ready queue, the TLB gets reloaded then: the first thing that happens is a TLB miss, you walk the page table, you get the new entry.
Now, good question in the chat: when the program that was referencing the old page starts running again, what happens? It causes a page fault, picks a different page to replace, and pulls its page back in off the disk. The crucial thing is not to have so much memory in use that the only thing you ever do is pull pages off of disk. That's called thrashing, and when it happens no actual work gets done — only paging. That's the worst possible scenario. Assuming we're not at that point, all we've done by pushing out a good page — and this is where replacement comes into play — is readjust the working set of the running processes to be the things actually in DRAM, so we get really fast access for all the pages actually in use. The hope is that we very rarely kick out a page in active use by a process, so the case being worried about in the chat is not too frequent — otherwise we're in trouble.
Okay. And of course, while pulling pages off of disk, we want to run something else, because we've got a million instructions to wait. So where does the sleep happen? On the disk's wait queue: the TCB for the thread that's paging in off of disk gets placed on the wait queue for that disk, and it's woken up when the block comes back from the disk. And when does the sleep happen? After you've started everything in motion and the access is under way on the disk — at the point where it's all up to the disk, that's when you put the thread on the wait queue, and it gets pulled off when the data comes back.
Now, the origins of paging are pretty clear. Back in the really, really old days you had a really expensive piece of equipment, many clients on dumb terminals, a small amount of memory shared by many processes, and disks holding most of the storage. In that scenario you want to keep most of the address space on disk and keep memory full of only the things actually needed, because memory is incredibly precious. A lot of the original paging mechanisms came out of that environment, with pages actively swapping to and from disk. Today is very different: we have huge memories, and machines that, rather than being owned by one organization and used by many, are typically owned by one user and running many processes on behalf of that user. When we talked about the different ways of scheduling, part of that variety reflected the changing needs of resource multiplexing — and the same is true of paging. If you look at something like a ps aux on a Unix-style operating system, you'll see the memory statistics: here about 75% of physical memory is in use, 25% is kept for dynamics, and a lot of it is shared — about 1.9 gigabytes in shared libraries. So really a lot of memory is working on behalf of one user; we're not so much optimizing by pushing unused things out to disk as quickly as possible as trying to keep things in, and we have a lot more memory to work with. Keep that in mind when we start talking about policies for paging.
There are many uses for virtual memory and demand paging, and we've already seen several. You can extend the stack — allocate a page and zero it. You can extend the heap. You can do a cheap fork: set the whole copied page table read-only, and only when somebody modifies a page do you actually copy it — so fork is a lot cheaper because of the page table. We've talked about exec, where you bring in only the parts of the binary actively in use, on demand. We haven't talked about mmap yet, but we will: explicitly sharing regions to access a file, accessing shared memory almost as if it were a file — and you can access a file in memory too.
So let me just show you — bear with me a little; we'll finish up here and maybe not plow into too many new mechanisms after this. Classically you took an executable, put it into memory, and did that for a small number of processes. What we do today: we take a huge disk; we've got our executable, with code and data, divided up into segments; we load it and map it into a process's virtual address space, which happens by mapping some of the physical pages off the disk into memory. When we start up the process, a swap space is potentially set up that represents the memory image of the process — and notice that the memory image on disk mirrors what we have in the virtual address space. The page table points at the things that are actually in memory — say, for the dark blue process here — and for all the other pages the operating system has to keep track of where on disk they are, so that if the user touches parts of the virtual address space that aren't currently in memory, it knows how to pull them in. As I mentioned, you could use the page table entry for part of that if you like, or a hash table in the operating system. So here's an example of the page table entry — those extra 31 bits — actually pointing back to where the pages are on disk; they could store disk block IDs, for instance.
Now, what data structure maps non-resident pages to disk? In Linux, for instance, there's a find-block operation that takes the process ID and the page number you're looking for and gives you back a disk block. That can be implemented in lots of ways: you could store it in memory, or use a hash table — like an inverted page table — just for that data structure. And you can map code segments directly to the on-disk image, so that when I start a program up I don't even have to load the code into memory: I point the process's virtual address space entries directly at disk, and as soon as they start page-faulting, the pages get copied into memory from disk — I don't even have to do
anything at startup, practically. And I can use this to share code segments among multiple instances of a program across different processes. Let me show you an example. The first virtual address space is the dark blue one; it has some code that's going to be shared — the cyan color — and then there's a green process, which has its own image on disk. Notice the green one has pointers back to where things are on disk, and the code in both points to the same disk blocks, which can be part of the original linked image — the a.out you stored on disk. Both processes backlink to it, and that way both end up with their code pointing at the same part of memory. That's much more efficient, since it's the same program running in different processes.
Here's an amusing thing to notice. The active process and page table might be the blue one. What happens when we try to execute an instruction, or touch some data, and it causes a page fault? We go to the page table entry; the data's not there; we take the page fault; we figure out where the data is and start the load on the disk controller — we'll talk about that in a couple of lectures. Meanwhile we point the active-process pointer at the other guy and he runs, and eventually the data comes in, gets mapped into the page table entry, and we restart the original process. So we let the light green one run for a while, the dark one was put to sleep, and it got woken up when the data came back from disk. That's all the little pieces we've been talking about, put together in one big picture. Any questions on that? Hopefully it helps. Okay, are we good?
So, the steps in handling a page fault, shown one last way: our program is running; it makes a reference that's looked up in the page table, which traps with a page fault; we figure out where the page is — on the backing store, the disk — and start bringing it into a free frame. How did we get a free frame? We replaced a page — or, ultimately, we'll be more sophisticated and keep a free list of free frames. We bring the page in, fix the page table entry, and restart. Where's the TLB in all of this? Can anybody give me a good reason why I haven't shown the TLB in the last several slides? You could say this is all with physical addresses — that's true — but the real reason is that the TLB is just a cache on the page table: it makes the page table faster, just as I'm not showing the hardware cache either — the cache just makes the DRAM faster. You can think of the TLB as part of the page table that makes everything faster; when we discuss the hardware mechanisms we have to talk about the TLB, but at this level of abstraction, pulled up a bit, we don't have to worry about it yet — except for the cases I mentioned on the slide about why this is like caching, where we have to remember to invalidate the TLB to keep the caching illusion alive.
All right, some questions we'll need to answer next time. During a page fault, where does the OS get a free frame? It might keep a free list; there might be a reaper busy pushing pages out to disk and keeping that free list stocked; maybe only as a last resort do we evict a dirty page. There are going to be some interesting policies there. How do we organize these mechanisms? We'll have to work out the replacement policy — our major topic next time. How many page frames per process — how much of that precious physical DRAM do we give each process? That's another question.
So, to finish up: we talked about caching, and the principle of temporal and spatial locality, which is present in regular caches and also in the TLB — how do we know when we have enough locality that the TLB works as a cache? We talked about the three-plus-one major categories of cache misses: compulsory, conflict, capacity, and coherence. We talked about direct-mapped, set-associative, and fully associative organizations — hopefully a reminder from 61C. We talked about how to organize the translation lookaside buffer, why it's typically fully associative, and what to do on a miss. Next time we'll start on replacement policies, beginning with some idealized ones like FIFO and MIN — and LRU, it turns out, is also an idealized one, for reasons we'll discuss next time. All right, I've gone way over, so I'm going to let you go. Hope you have a great evening, and we'll see you on Wednesday.
CS_162_Operating_Systems_and_Systems_Programming_Berkeley | CS162_Lecture_23_Distributed_Decision_Making_Cont_Networking_and_TCPIP.txt | Welcome back, everybody, to CS162. We're going to pick up where we left off on this rainy day in the Bay Area. If you recall from last time, we were talking about communicating entities using a protocol — a protocol being a set of well-defined messages with semantics — and one of the things you often do with a protocol, as we mentioned, is produce a replicated state machine on either side of a connection. For instance, I showed you state machines that might be here at Berkeley and in Beijing, where the idea of the protocol is to make sure that any arc taken on one side is also taken on the other. These state machines could be everything from copies of files to something much more interesting, like the state of a running physical system, replicated on both sides. As we mentioned, a protocol has a syntax — how the communication is actually specified and structured — as well as semantics: what each of the communications means. And typically there's stable storage on either side, so that if either side crashes you can pick up where you left off; today we'll talk about one use of that stable storage, which is to let us do decision making. The other thing we started talking about was the idea of distributed applications, where the individual pieces are spread all over the network, and the question is how we're going to program something like that. For instance, you'll need to synchronize multiple threads on different machines, but you don't have shared memory, so test-and-set and all the synchronization primitives we talked about in the first part of the course are not really available to you. And so the abstraction
basically is to make use of messages which is pretty much what you've got and sending from one side to the receiving on the other and the nice thing about messages is they're already atomic so you either receive the message or you don't and one of the ways you make sure that you you don't receive a corrupted message of course is you put checksums or something on it but this atomicity the either receive or don't can be turned into all sorts of interesting communication primitives which will lead among other things to the ability to build decision making on top of the network which we'll talk about in a little bit so the interface to this kind of message based communication protocol is there's typically a mailbox with an address of some sort mbox which is a temporary holding area for messages and it has in it both the the destination and the potential cue things are going to put in so you could imagine this like a post office with a lot of post office boxes the m box itself is going to be not only which post office but which box to put it into and then of course their send and receive primitives send says send a message to a certain mailbox and receive basically says um take something out of the mailbox and put it into the buffer and usually that's specified in a blocking sense so that threads sleep until they receive something but of course all of the asynchronous primitives we talked about earlier in the term are available typically as well so um when should send return so when a client does a send of a message there's a real question about when it should return to the client uh should it return only when the receiver gets the message so that might be a case where i've not only know that the message was received at the other end but there's an acknowledgement that could take a long time maybe when the message is safely buffered in the destination um okay that way we took the receiver out of the loop or right away if the message is already buffered on the source 
node and going out so there's a lot of possibilities here as well and really questions are kind of have two parts to them when can the sender be sure that the receiver actually received the message that's an overriding question but also when can the sender reuse the memory that contains the message and what we'll see in the latter part of the lecture here is this question becomes comes up because if a message gets garbled on the way to the destination we need to retransmit and typically the sender then needs to hold on to the message for long enough that the retransmission can happen so mailbox really provides a one-way communication from t1 to t2 and really there's a buffer that's a combination of various storage areas in the network it's very similar to a producer consumer kind of thing where send is v and receive is p however you can't really tell in this case whether the sender or the receiver is local or not so we can use send and receive for a producer consumer style of communication not surprisingly so the producer might do something like this where while one it prepares a message sends it off and goes in a loop the other consumer will be while receiving uh do something and then process the message and so uh this the only way this is any different from some of the synchronization examples we gave earlier in the term is really that there's a network in between and so the physical separation here could be great but other than that it looks a lot pretty similar to some of the producer consumer code that we wrote earlier there's no need in uh for the producer consumer to keep track of space in the mailbox because it's all handled by send and receive in particular if there's no space for some reason send a block on the way out and of course receive will block if there's nothing in the buffer and so all of that's taken care of for us under the covers this is going to be one of the roles of the tcp window and we're automatically going to track the size of the 
We're automatically going to track the buffer space at the receiver so that we don't send so much that the receiver overflows. So what about two-way communication? Two-way communication is pretty standard for everybody; it's just two of these things in opposite directions. This is request/response: you basically set up a mailbox on either side, one for the outgoing messages and the other for receiving the incoming messages. It's also called client-server, as we mentioned earlier. Here's an example of a file service where the client does something like send "read rutabaga" into the server's mailbox, and the server sends a response back; the client does a blocking receive on the client mailbox for the response. The server sits in an infinite loop (I'm not showing that here): it waits to receive a request, decodes it to figure out what it is, reads the file into an answer buffer, and sends that back. So this idea of send and receive primitives in both directions lets us construct all sorts of interesting things. Now, one thing that's buried in all of this, which I'm not going to talk about today but will next time, is the encoding of the send and receive commands: how do we make sure that the server understands the proper ordering and encoding of numbers from the client, et cetera? That's going to be an interesting discussion. So let's talk about consensus. The consensus problem is really this: all nodes in the system (where a node is some item on the network, which can be distributed away from the other nodes) propose a value; some nodes might crash and stop responding; but eventually all of the remaining nodes decide on the same value from the set of proposed values. It's like everybody votes on which value they want, and we come up with a result, which is the result of that decision making.
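As a toy illustration of that problem statement (purely a sketch with hypothetical names; real consensus protocols are far subtler, precisely because of the failure modes discussed next), imagine each node proposes a value, some crash, and every survivor applies the same deterministic rule to the same set of proposals:

```python
# Toy model of the consensus problem statement: every node proposes a
# value, some nodes crash, and all surviving nodes must decide on the
# SAME value drawn from the set of proposed values. The rule here is
# "decide the minimum surviving proposal" -- any rule works as long as
# every survivor applies it to the same proposal set.
proposals = {"n1": 42, "n2": 7, "n3": 19, "n4": 7}
crashed = {"n3"}                     # n3 stops responding mid-protocol

surviving = {n: v for n, v in proposals.items() if n not in crashed}

# Every surviving node runs the same rule over the same proposals,
# so they all reach the same decision.
decisions = {n: min(surviving.values()) for n in surviving}

print(decisions)   # every survivor decides 7
```

The hard part, which this sketch waves away, is getting every survivor to agree on *which* proposals were made in the first place when messages can be lost.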
And this has got to work across a network, and it's got to work in a way that is resilient when nodes crash. Distributed decision making is really just choosing between true and false. The consensus problem I mentioned above is more general, it's about a value, but we can do a lot if we just choose between true and false, or commit and abort, et cetera, and that's typically called distributed decision making. What's going to be very important in all of this is that, yes, we can make the decision, but if we don't record it, then nobody will know in the future what it was we came up with. So there is a durability aspect to this: how do we make sure the decision cannot be forgotten? This is the D of the typical ACID semantics in a regular database, and in a global-scale system the question of how to make something durable and long-lasting gets into what we talked about last time: things like RAID, erasure coding, massive replication, or even blockchain, which I'll mention briefly in a little bit. So let's start with an interesting decision-making problem, typically called the General's Paradox. You have two generals on separate mountains, and they can only communicate via messengers, who ride horses down one mountain and back up the other; the messengers, unfortunately, can be captured. The question is: how do we coordinate an attack? If the two armies attack at different times, they all die; if they attack at the same time, they win. So the trick is simply: how do we make sure that everybody decides on the same time? Now, I did see "SMS" in the chat; we'll assume SMS is not available, because that's an out-of-band communication mechanism, so let's assume they have to use the horse message system, the HMS. The General's Paradox was originally named after Custer, who died at Little Bighorn because he arrived a couple of days too early. So let's look at this problem
for a moment. Can messages over an unreliable network really be used to guarantee that two entities do something simultaneously? That's our question, and notice that the simultaneity here is important. Remarkably, the answer is no, because even if all the messages get through, you have to allow for the fact that they might not have, and so you're never quite sure. Here are the two sides: the first general says "11 AM?", the other says "Yep, 11 works," and then "11 it is." But what if you don't get this ack? And so on, back and forth: it turns out there's no way to be sure that the last message gets through. The lack of reliability of the messaging is the paradox here; it makes it impossible to agree on an actual time such that everybody goes through with it. Now, of course, in real life you could use a radio or some other simultaneous or out-of-band communication; in this particular domain, where we don't have any out-of-band communication, it turns out you just can't do this for simultaneity. So clearly we need something other than simultaneity as our requirement, and what would that be? Well, two-phase commit is basically the alternative. Since we can't solve the General's Paradox, i.e. the simultaneous action, let's solve a related problem: a distributed transaction, where two or more machines agree to do something, or not do it, atomically. There are no constraints on time, just that it will eventually happen; the constraints on time have been removed because we're only saying that eventually something will happen and everybody will agree. This atomicity constraint, though, is an interesting one that I wanted to say something about. It says: suppose we have 20 elements in the system; either all 20 of them will decide to do it, or all 20 of them will decide not to do it, but you'll never get some of them doing it and some of them not doing it. That's our distributed two-phase commit. So, two-phase commit was originally
developed by Turing Award winner Jim Gray, whom you see here on his boat. He was the first Berkeley CS PhD, in 1969, and a lot of important database breakthroughs are also from Jim Gray; he is an amazing alum of Berkeley as well. Unfortunately, a number of years ago he disappeared in his sailboat in the bay and nobody ever found him, but there's a picture of him in happier days. He developed the protocol called two-phase commit, and it's the basis for a whole bunch of other protocols, so I want to make sure we all know about it. One of the most important things we need to start with is making sure that once an entity in the system, a node, makes a decision, it won't go back on that decision. So we need a persistent, stable log on each machine to keep track of whether a commit has happened or not, and if a machine crashes, the first thing it does when it wakes up is check its log to recover the state of the world at the time of the crash. So, the prepare phase of two-phase commit (there are going to be two phases, no surprise) is that the global coordinator requests that all participants promise to commit or roll back the transaction; participants record their promise in the log and then acknowledge. If anybody votes to abort, the coordinator writes abort in its log and tells everybody to abort; the only way it will actually commit is if all of the participants say okay. The commit phase: after all the participants respond that they're prepared, the coordinator writes commit to its log, then asks all the nodes to commit and to ack, and after it receives all the acks it records in its log that the commit completed. So notice that the log is used at several points in this protocol to make sure that once we've made a decision, we don't do something different later. The log is really used to
guarantee that all machines either commit or they don't. The two-phase commit algorithm has one coordinator and N workers (or replicas). A high-level description: the coordinator asks the workers if they can commit; if they all reply "vote commit," then the coordinator broadcasts "global commit"; otherwise the coordinator broadcasts "global abort." And notice there are all sorts of things that could go wrong here, such as a worker we asked to vote simply going offline and never coming back: if the coordinator times out and doesn't hear from somebody, it just goes ahead and aborts. So basically we can make sure the protocol is truly atomic, either everybody commits or everybody aborts, with no halfway, and we can deal with all of these different failure conditions, which is what we want. The workers obey the global messages, whatever they happen to be, and we're using, as I said, a persistent stable log on each machine to keep track of what it's doing: if the machine crashes, when it wakes up it checks its log to recover the state of the world at the time of the crash, and then keeps going. So the setup is: the coordinator initiates the protocol and asks every machine to vote; there are two possible votes, commit or abort; and we commit the transaction only if there's unanimous approval. In the prepare phase, the worker either agrees to commit or to abort. If it agrees to commit, the machine is guaranteeing it will accept the transaction; this is recorded in the log, so the machine will remember the decision if it fails and restarts. Once it's written in the log that it's decided to commit, it could crash and come back up, over and over, and as long as it keeps looking in the log it can remember what decision it made and won't make a different one. Similarly, if a machine has said that it will abort, it records that in the log.
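The prepare/commit logic just described can be condensed into a minimal single-process sketch (hypothetical names; real two-phase commit runs over a network, with the persistent logs modeled here as plain dicts):

```python
# Minimal sketch of two-phase commit: one coordinator, N workers.
# Each party's "log" is a dict standing in for persistent stable storage.

def two_phase_commit(worker_votes):
    """worker_votes maps worker name -> "commit", "abort",
    or None (crashed / timed out, which is treated as an abort)."""
    logs = {w: {} for w in worker_votes}
    coord_log = {}

    # Phase 1 (prepare): every responding worker records its promise
    # in its log before replying to the coordinator.
    for w, vote in worker_votes.items():
        if vote is not None:
            logs[w]["vote"] = vote

    # Coordinator decides: commit only on unanimous "commit" votes;
    # any abort vote or timeout forces a global abort.
    if all(v == "commit" for v in worker_votes.values()):
        decision = "global-commit"
    else:
        decision = "global-abort"
    coord_log["decision"] = decision    # logged before telling anyone

    # Phase 2: every worker obeys the global decision and logs it.
    for w in logs:
        logs[w]["final"] = decision
    return decision, logs

d, logs = two_phase_commit({"w1": "commit", "w2": "commit", "w3": None})
print(d)   # w3 timed out, so the whole transaction aborts
```

The key invariant, visible in the return value, is that every worker's final log entry is identical: all commit or all abort, never a mix.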
That way, if it crashes and comes back up, it'll never make a different decision. Now, if a worker was offline or crashed, it'll come up and notice that it never made any decision; at that point it can ask the coordinator what to do next, or it can just assume abort and send an abort vote; those are the two options. But notice that if it actually makes a particular decision, it records it in the log. To finish everything up: when the coordinator learns that all machines have agreed to commit, it records the decision in the log, applies the transaction, and informs the voters to go forward. If it aborts, it's because at least one machine voted to abort or didn't respond; it records the decision to abort in its local log, does not apply the transaction, and informs all the voters that we're going to abort. And notice that because no machine can take back its decision, exactly one of these two things happens: either we commit on all machines or we abort on all machines. Questions? Now, this is a fairly simple primitive, but it's very powerful, because it says I can take a bunch of nodes and make sure they all do the same thing, and from that you can build all sorts of interesting things: distributed file systems, other types of distributed decision making, et cetera. "How is the coordinator decided?" That's a really good question. We're going to assume right now that the coordinator is distinguished somehow, say because it's been compiled with code that says it's the coordinator. In a real system, things get much more interesting: there's a voting process to choose the coordinator, and then the chosen coordinator goes ahead and coordinates. But that's a good question. "If one machine keeps crashing, the whole system will never commit." Yes, that is absolutely correct, and yes, that's very bad; you have correctly
analyzed one of the chief weaknesses of this algorithm. I'll say that again later, but you've already preempted me: this particular algorithm is subject to one faulty machine keeping everything from committing.

So, there is a midterm, the last one, coming up, five to seven, as before. Material goes all the way up to lecture 25, which is Monday, 11/30; that's the last lecture that will be on the midterm, the one after Thanksgiving. Cameras and Zoom screen sharing again, just like with midterm two. There will be a review session; we haven't announced it yet, and I'm not entirely sure when it'll be, but probably the week after Thanksgiving, on Tuesday or something like that. Lecture 26 is going to be a fun lecture, so if there are topics you'd want to know something about, let me know; I'll pick a set of topics if I don't hear enough suggestions, so you're welcome to email me lecture suggestions. I don't have a lot else; we're actually in the middle of due dates. I did want to repeat one thing I said briefly last time: please be careful of the collaboration policy. As I mentioned, explaining a concept to somebody in another group is okay; discussing algorithms or testing strategies is okay; discussing debugging approaches, all of these things at a high level, is okay; searching online for generic algorithms, like hash tables, is okay. Where this strays into problems is when you're sitting working with somebody and you start discussing explicit details of the homework back and forth; that is not okay. So, for instance: sharing code or test cases with another group; copying or reading another group's code or test cases; copying or reading online code or test cases from prior years; helping someone in another group to debug their code, or helping
somebody else do their homework; these are all things that are not okay. And we compare project submissions against prior-year submissions, against internet sources, and against your code. So just say no to over-collaboration; don't put a friend in a bad position by asking for help, because both of you end up in trouble. I just wanted to repeat that; we've had a few cases on the fringe of violating the collaboration policy.

Okay, now, before we leave two-phase commit, I wanted to give you a little more graphic detail, just so you can see. Let's look at the coordinator algorithm. The coordinator sends a vote request to all workers; the workers wake up after waiting for the vote request and make a decision. If they're ready, they send "vote commit"; if not, they send "vote abort," and they make sure to record their decisions on disk, in the log. Then, if the coordinator receives "vote commit" from everyone, it sends a "global commit"; otherwise it sends a "global abort." In that second phase, workers that get a global commit, commit, and workers that get a global abort, abort. Now notice, and I'm going to say more about this, that the notion of commit and abort is basically a yes-or-no decision, and what you're saying yes or no to could be arbitrarily interesting and complex. It could be a really long, complicated transaction making many changes to a file system, previously transmitted to the workers, and now all the workers are doing is making a thumbs-up or thumbs-down decision on whether to apply it to the file system or not. So all we're really doing with two-phase commit is making this yes-or-no decision globally. Here's an example of a failure-free run: the coordinator sends "vote request," each of the workers says "commit," the coordinator says "global commit," and we're
good to go, and this doesn't take any excess time. You can think of the coordinator as having a state machine: it starts in the INIT state, receives a start from some other part of the software, sends the vote requests, and sits in the WAIT state; then, if it receives "vote commit" from everybody, it sends a commit, otherwise it sends an abort. A very simple state machine. The workers have a somewhat similar state machine: they sit in the INIT state waiting for a vote request, and at that point, if they're going to commit, they go to the READY state to start the commit process. That really means they tell the coordinator they're ready to commit, but now they've got to wait to find out what the decision was. On the other hand, if they've decided to abort, they tell the coordinator and just go to the ABORT state; they don't have to wait for any more information, because they know what's going to happen: an abort. So, just to give you a couple of failure modes that are kind of interesting: if a worker fails, what happens? Well, the coordinator is sitting in the WAIT state waiting for the worker, and it's going to have to do something. What happens in WAIT is you get a timeout, and you just treat that like an abort, so that's easy. Here's an example where the coordinator sends "vote request" and some of the workers say "commit," but this last one doesn't respond, either because the message got lost or because the worker crashed; at that point there's a timeout, and the coordinator says, "Well, I didn't hear from everybody, I'm just going to assume an abort," and sends the abort out. Similarly, the workers can deal with coordinator failure in a couple of ways. The worker waits for a vote request in INIT; it could time out and just plain abort, and the coordinator will handle that as an abort.
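The two state machines just described can be written down as transition tables (a sketch with hypothetical names; the key behavior is that a timeout in the coordinator's WAIT state is simply treated like an abort vote):

```python
# Transition tables for the 2PC state machines described above.
# Each entry maps (current state, event) -> (next state, action).

COORDINATOR = {
    ("INIT", "start"):      ("WAIT",   "send vote-request"),
    ("WAIT", "all-commit"): ("COMMIT", "send global-commit"),
    ("WAIT", "any-abort"):  ("ABORT",  "send global-abort"),
    ("WAIT", "timeout"):    ("ABORT",  "send global-abort"),  # silent worker
}

WORKER = {
    ("INIT",  "vote-request+yes"): ("READY", "send vote-commit"),
    ("INIT",  "vote-request+no"):  ("ABORT", "send vote-abort"),
    # In READY the worker must wait: it cannot unilaterally abort,
    # because the coordinator may already have decided to commit.
    ("READY", "global-commit"):    ("COMMIT", "apply transaction"),
    ("READY", "global-abort"):     ("ABORT",  "discard transaction"),
}

def run(table, state, events):
    """Drive a state machine through a sequence of events."""
    actions = []
    for ev in events:
        state, action = table[(state, ev)]
        actions.append(action)
    return state, actions

# A worker crashes silently: the coordinator times out and aborts.
state, actions = run(COORDINATOR, "INIT", ["start", "timeout"])
print(state)   # ABORT
```

Note there is deliberately no (READY, timeout) entry for the worker: once it has voted commit, it is stuck waiting, which is exactly the blocking problem discussed below.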
Or the worker could send off its response and never find out what the global result is. At that point, however, the worker has to wait. It can't just abort, because if it sent a commit, it has to wait to see whether the coordinator is going to abort or not. You really can't take a lack of response from the coordinator as an abort, because it could be that the coordinator crashed, so you have to wait; potentially the coordinator may have to crash, reboot, come back up, and eventually tell the worker what to do, because we have to make sure that all the workers do the same thing; they're not allowed to make a decision on their own. All right, now here's an example of the coordinator failing where it never sent the vote requests, so the workers all time out and abort. Here's another example of a coordinator failure, where the votes come in, the workers vote to commit, but the coordinator doesn't receive them. It restarts, and if it knows from its log that it never sent a global abort or commit, it can just send an abort to everybody. "How does the worker know the coordinator received their commit?" Well, it doesn't, so there is that question. If the coordinator never received the commit vote, then potentially the coordinator will treat that as an abort on the part of that particular worker. Now, you could put a retry protocol in here to do your best to make sure the worker hears from you; that's a possibility, but you would need to make sure you didn't violate the atomicity property. So there's the interesting question of how the coordinator makes sure that each worker got the global commit it sent out. What's good here is that if the coordinator sends everything out and one of the workers doesn't receive it, the worker can time out and ask the coordinator what's up, at which point the coordinator can tell it. So
there is that ability. This example leads to an abort simply because we're assuming the crash happened and the coordinator didn't properly receive everything, so it treats these all as timeouts and just aborts. Now, what you can do here, and everybody's thinking about this, this is great, is figure out how to optimize this in many ways. The key semantics you have to keep true are the all-or-nothing property: either everybody commits or everybody aborts, but never partially. As long as you maintain that property, you can do various optimizations to make up for message loss and a few other things like that. There are many optimizations, including one where, if you haven't heard from the coordinator, you talk to other workers, and they can tell you what the coordinator said: if they know the coordinator said commit, then commit is what the worker should have gotten from the coordinator as well. So there is a way to do a gossip protocol among workers that also maintains the semantics, but the key thing is you've got to maintain the semantics. To that end, durability is very important. All the nodes have stable storage to store the current state; stable storage is non-volatile storage, backed by the disk, that guarantees the atomicity of the writes, and it makes sure that everybody either sticks to their decisions or, once they've heard of a decision, keeps remembering it so they can apply it. That stable storage is going to be something like SSD or NVRAM or disk or whatever. And then on recovery, like I said, you can look at the state machines and figure out all the different places to abort after you've recovered, based on the information in your log and what state you think you're in. So what does two-phase commit tell us? Well, two-phase commit is a famous, very simple first cut at distributed decision
making. Why is it desirable? Well, it's desirable for fault tolerance: you like the fact that a group of machines can come to a decision even if one or more of them fail during the process. The simple failure mode it relies on is something often called fail-stop, which means that when a node fails, it fails by just stopping and not communicating anymore. Unfortunately, if you get into more complex types of failures, where a failing node starts sending out corrupted messages, or worse, a malicious node starts sending intentionally bad messages, that's no longer fail-stop, and two-phase commit will not work properly. The other thing is that after the decision's been made, it's recorded in a bunch of places, so there's a nice replication here: if a node subsequently dies, you can always ask other nodes what the decision was that they all came to. So why is two-phase commit not subject to the General's Paradox? Remember, we said the generals weren't able to make a decision about time. The answer is that two-phase commit is about the nodes eventually coming to the same decision, not necessarily at the same time. If you have a node that crashes and comes back up, crashes and comes back up, what will eventually happen when it runs is that it will come to that commit-or-abort decision and apply it properly, but it may take a while. We don't care how long it takes; what we care about is that it is eventually atomic. Now, again, the question came up: doesn't this assume the nodes will eventually come back up? Yes. This is the simplest form of decision making, and it has that unfortunate property that a permanently crashed node can bring the decision making to a grinding halt. Just keep that in mind; we'll talk about other options in a moment. So, an undesirable feature of two-phase commit is blocking, which of course just came up in the chat: one machine can be stalled
until another site recovers. You can imagine site B writes "prepared to commit," sends a yes vote to the coordinator, and crashes; site A crashes; B wakes up, checks its log, realizes it voted yes, and sends a message to site A asking what happened. At that point B can't decide to abort, because the update may have committed, so B is blocked until A comes back up. You can come up with various scenarios like that, where nodes are stuck on other nodes, and that's an unfortunate property of two-phase commit: the blocked site holds resources, like locks on updated items, pinned pages, et cetera, until it learns the fate of the update. So that's a fundamental problem with two-phase commit. What are some alternatives? Well, there's three-phase commit. I'm not going to talk about that in detail today, but there's one more phase, and it actually allows nodes to fail or block indefinitely while the rest of them still make progress, and that's an important property. You can imagine that if you have a system with a lot of faulty nodes, or a system distributed across a geographic area where it's quite possible that the networks will go down or that some of the nodes will fail, then you're not going to want to use two-phase commit; you're going to want at least three-phase commit. Another alternative, which is used by Google and a bunch of others and doesn't have the two-phase commit blocking problem either, is called Paxos. Paxos was developed by Leslie Lamport; I showed you his picture earlier. There's no fixed leader in this case: it chooses a new leader on the fly, so it can deal with a failed leader, even one that fails in the middle of the protocol; it can pick a new leader. The interesting thing about Paxos is the way it's defined; I think I put up one of the original Paxos papers, which is kind of fun. It's defined in terms of a legislative assembly in ancient
Greece, and it's a little bit obscure in the way it was originally defined, and it can get pretty complicated in its normal use, but Google is actively using versions of Paxos called Multi-Paxos, and they have been for 10 years now. There is an alternative called Raft, which was developed at Stanford by John Ousterhout. He thought Paxos was really complicated and wanted a version of a decision-making algorithm that he could describe to people easily, and what he came up with was Raft; that's an alternative you could look up. But none of this helps us with the following: what if a node is malicious? We can deal with a node failing, but if a node is actively attempting to compromise the decision making, we need to do something, and we have a couple of options here: Byzantine agreement and blockchains. I said I was going to talk about them next time; I'm actually going to talk about them in just a moment. So there are many alternatives for distributed decision making, which you can take as a key indicator that distributed decision making is important. Okay, so let's talk about the Byzantine Generals Problem. There are N players: one general and N minus one lieutenants, and the idea is that one of these lieutenants may be malicious. What does a malicious lieutenant do? A malicious lieutenant will do either illogical operations or, much worse, operations that are intentionally designed to violate the protocol and prevent something from happening properly. The commanding general sends attack or retreat commands; as you can imagine, this is again like yes or no, or commit or abort, all of these two-way commands. The constraints that apply are as follows: all the loyal, non-malicious lieutenants will do the same thing. If you notice, these two lieutenants are
loyal, and they've both decided to attack, so they're both attacking. The malicious one may do who knows what, but all the loyal lieutenants will do the same thing, and if the commanding general is loyal as well, which means he's not sending conflicting commands to people, then he will also do what all the loyal lieutenants are doing. That's the Byzantine Generals Problem. The trick here is that we want a majority of the players (in fact, we'll see in a moment it's going to be 2f + 1 of them) to all do the same thing, and that will be just like our atomicity property from the two-phase commit protocol: they either all decide, in this case, to attack or retreat, or they all decide to commit or abort. Only the malicious ones may do something totally arbitrary, but they're also participating in the protocol, and they will not be able to fool the other participants into doing something they're not supposed to. That's what's tricky. The question is: in the presence of a malicious player, is there a way to come to a coordinated decision amongst all the non-malicious players? The reason this is complicated is that we don't know whether the general has been compromised either, so even if the general sends conflicting orders, we still have to have the preponderance of the non-malicious lieutenants all do the same thing. It may not be what the general asked, because the general is giving conflicting orders, but they'll still all do the same thing. All right, questions? Now, once again, Leslie Lamport came up with the Byzantine agreement problem, in a very fun paper which I believe I also have up on the readings. But you might ask yourself how this can help us design systems, so let's talk a little bit about some impossibility results. I'm going to get rid of the clip art here and go to
something a little simpler. One of the key ideas is that you can't solve the Byzantine Generals Problem if there are only three players, and I'll show you why. Here's an example with one general and two lieutenants. If the general is not insane and says "attack" to both lieutenants, and one of the lieutenants is malicious, that lieutenant may say, "Well, the general told me to retreat." So this poor lieutenant on the left has no idea whether to attack or retreat. Then look at the situation in which the general is malicious and sends "attack" to one lieutenant and "retreat" to the other, so there's conflicting information, and the malicious-looking lieutenant says, "The general told me to retreat." Notice the poor lieutenant on the left can't distinguish between those two situations and really has no way to fulfill the requirements. Again, those requirements are the two consistency conditions I showed you: all the loyal lieutenants obey the same order, and if the commanding general is loyal, then all the loyal lieutenants do what he requests. In the first scenario the general is loyal and asking to attack, so this lieutenant ought to attack, but he doesn't have enough information; in the second, the general is malicious, so the two lieutenants should at least be doing the same thing, but there's no good way for the one on the left to figure that out either. This impossibility result generalizes: it turns out that if you have f malicious entities, then you need a total number of players n greater than 3f to make this problem solvable. That's an impossibility result. Now, a good question: are the malicious nodes colluding? Certainly, if they like; they're allowed to do anything they want. In fact, they can even talk to aliens and listen to Elvis before they make their decision. There are absolutely no constraints on the malicious players here.
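That n > 3f bound is easy to check numerically (a small helper with hypothetical names, just restating the result from the lecture):

```python
# The Byzantine agreement bound: to tolerate f malicious players,
# the total number of players n must satisfy n > 3f, i.e. n >= 3f + 1.

def can_tolerate(n, f):
    """True if n total players can reach Byzantine agreement
    despite f of them being malicious."""
    return n > 3 * f

def min_players(f):
    """Smallest n that tolerates f malicious players."""
    return 3 * f + 1

print(can_tolerate(3, 1))   # False: the 1-general, 2-lieutenant case fails
print(min_players(1))       # 4: you need a fourth player to tolerate one traitor
```

So the three-player scenario on the slide (n = 3, f = 1) is exactly the boundary case that fails, and adding one more loyal lieutenant is what makes agreement possible.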
And the whole notion of malicious, as you can imagine, brings colluding in as an obvious possibility. So, surprisingly, at least it was surprising the first time I heard about this, various algorithms actually exist to solve this problem. Now, the question came up: can't you tell who's giving you the message? The answer is that even if you can tell who's giving you the message, you don't know whether they're malicious or not, because a malicious player, by definition, can act in a way that hides that they're acting maliciously. They can tell different things to different people, and you don't know whether they've told the same thing to everybody or different things to everybody. That's why this problem is really interesting: we assume a maximally evil, malicious player who, the moment you try to see whether they're malicious, behaves nicely, and who, in the middle of the protocol, behaves evilly, and you can't tell the difference. All right, so various algorithms exist to solve this problem. The original algorithm from the paper was exponential in the number of players n, so it was clearly not practical, though it was an interesting proof that a solution existed. Newer algorithms have a message complexity of order n squared; there's one from MIT from around 1999 to the early 2000s. And better yet, there are newer versions using blockchain algorithms that are much closer to linear in message complexity. So the use of a Byzantine fault tolerance algorithm allows multiple machines to make a coordinated decision even if some subset of them, fewer than n over 3, are malicious. You can think of this Byzantine agreement algorithm, and I'm not going to go into great detail on it, as something I think I even put up on the resources page; let me just quickly look here, and if I didn't, I'll be
happy to reference it; yeah, i have the byzantine generals problem here. but anyway, if you think of this algorithm running amongst a lot of different nodes, what happens is a request comes in and a distributed decision goes out, even if there are some malicious nodes in there, which are these little red circles. okay, and so that's a pretty powerful idea. and the one downside that you might imagine: can anybody think of a downside to this? assume that we have everything working properly; what's a downside to this particular algorithm? okay, slow is a good answer; it turns out that it's less slow than you might think, but certainly speed is a question. what else? n squared messaging, great. now it turns out, like i said, there are newer versions of byzantine agreement that are done with blockchains that are more linear in number of messages, so that's good as well. hard to hack a lot of good nodes? there you go, good, i like that. so imagine that the reason these nodes are red down at the bottom is somebody hacked into them. now if all of these nodes are running the same operating system, you might imagine that a really clever hacker might figure out where all these nodes are and start compromising them one after another, and the moment you violate the assumption that you can only have f faulty nodes, suddenly this algorithm doesn't work anymore. and so the only way to really make this work, and this is kind of considered the fundamental problem here, is you have to keep reinstalling these nodes and repairing them over and over again, because you can't tell whether they've been breached, but you need to keep reinstalling them as if they had, and you try to do that faster than people can breach the nodes, because you've got to stay ahead of that f number. so that's potentially an issue. okay, so let's take a different question here, which is: is a blockchain a distributed decision-making algorithm or not? and just to say a little bit
about what a blockchain is: so blockchains really came to prominence in 2009, when bitcoin first showed up, and the idea of a blockchain is pretty simple if you've taken any cryptography classes like 161 or whatever security classes, but i'll just tell you briefly. the idea is that you have a series of records, and each one stores a cryptographic hash over the previous record, and so that's where the chain comes from. and the reason that's useful is, if i know this spot, then nobody can go back and fake out the previous spots, because they're all hashed together in a way that you can't insert arbitrary records in here. and so these chains, starting from a given head and pointing backwards, with the older records in the back, are basically things that can't be altered, even though this data is stored in insecure locations all over the network. now, the hash pointers, that's these blue things, can't be forged; that's an assumption. the chain has no branches, except right at the very head there might be some brief branches, and the blocks are considered authentic when they have authenticity info in them. now for those of you that know something about signatures, you might say, well, if there's a signature here, then that signature proves that the yellow block is authentic, and therefore everything below it is authentic. in bitcoin, what happens is the authenticity information is actually a little different: there's a consensus algorithm that's used to choose which one of these is the head, and in things like bitcoin, and at least the first versions of ethereum, the head is basically chosen by solving a very hard problem. this is called proof of work, so you have to burn a lot of energy, which they do in huge offshore server farms these days: you have to find a proof of work to solve a problem, and then that will make something authentic, and then basically the longest
chain wins. okay, so really what's happening is, as you're submitting new things to happen, they get added to various chains, and then all of the different miners out there, i'll show you a picture in a second, are all busy trying to solve the problem first, and the first one that solves it, that becomes an authentic chain, and the chains have a tendency to re-merge afterwards. okay, so i don't want to worry you with the big details of this; i'll be happy to point you to some blockchain papers if you're curious. but here's a way to think about whether this is a distributed decision-making algorithm or not. so spread throughout the world we have these miners with their server farms, and what they're busy doing is talking to each other about the parts of the blockchain that aren't in question; only the heads, where there's a little bit of divergence or branching, are the things that are in question. and what happens is, various entities submit proposals of new transactions to the miners, and the miners try to add them to the head of one of the branches, and then they try to solve a problem that takes a lot of power, and if they solve it first, then the proposal becomes a permanent part of that branch, and the other branches have a tendency, because they're shorter, to die off. okay, and so what we're really talking about here for decision making is, this proposal could be something like: i'd like to commit such and so data to a certain part of the file system. what will happen is, the miner will pack it up in a transaction, put it inside one of these transactions in the blockchain, try to solve the problem, and eventually it may become part of the permanent blockchain. and furthermore, the decision means that it's in the blockchain, so if i say commit, do this write on the file system, it gets committed to the blockchain, and then it becomes replicated around the world. in fact
we can have observers all over the place looking at it, and now that decision has been made durable in a way that's extremely hard to destroy. and so really, you could use bitcoin to do decision making in the sense that we're talking about here. okay, now, a question here: why is proof of work necessary, given an individual node has no way to communicate with every other node? so the reason proof of work is required is that we want to try to make sure that only people who have invested a lot of time and energy are allowed to add transactions. it's really twofold: it's an attempt to prevent people from just extending the chain arbitrarily any way they want, because we want to be restricted to real proposals and they have to invest energy in it, and then the assumption is that, assuming the number of players is large enough, no one player has an overwhelming advantage to add things to the blockchain. and so that's how we get rid of the byzantine nature of the fact that these people are all untrusted: they're putting their energy in here, and so the proof of work is basically making them put work into it, they have to put real dollars into adding things. and if they successfully generate a proof of work, then they also get a little bitcoin money back as well. so i would say the proof of work is the way to try to make everybody behave correctly and avoid byzantine decisions; you can decide whether you buy it or not, but that's a much more deeply philosophical question. so i would say yes, blockchain is a type of distributed decision making. by the way, outside of the realm of file systems, i suppose in bitcoin the proposals that get put into the chain are things like: i'm gonna transfer a dollar fifty worth of bitcoin, which is like point zero zero five or something, to buy a cup of coffee, and so the proposals are actually transactions of money exchange as well. okay.
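the chaining and proof-of-work ideas above can be sketched in a few lines of python. this is a toy illustration under simplifying assumptions, not bitcoin's actual block format or difficulty; the field names, payloads, and the four-hex-digit difficulty here are made up:

```python
import hashlib
import json

# toy hash chain: each block stores a sha-256 hash of the previous block,
# so altering an old record breaks every later link in the chain
def make_block(prev_block, payload):
    prev_hash = ("0" * 64 if prev_block is None else
                 hashlib.sha256(json.dumps(prev_block, sort_keys=True)
                                .encode()).hexdigest())
    return {"prev_hash": prev_hash, "payload": payload}

def verify(chain):
    for prev, cur in zip(chain, chain[1:]):
        expected = hashlib.sha256(json.dumps(prev, sort_keys=True)
                                  .encode()).hexdigest()
        if cur["prev_hash"] != expected:
            return False
    return True

genesis = make_block(None, "genesis")
b1 = make_block(genesis, "tx: transfer $1.50 of bitcoin for coffee")
b2 = make_block(b1, "tx: commit data to the file system")
chain = [genesis, b1, b2]
assert verify(chain)

# tampering with an old record invalidates everything after it
b1["payload"] = "tx: transfer $1,000,000"
assert not verify(chain)

# toy proof of work: search for a nonce whose digest starts with a run of
# zero hex digits; finding it costs work, but checking it takes one hash
def proof_of_work(data, difficulty=4):
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = proof_of_work("some block")
assert hashlib.sha256(f"some block:{nonce}".encode()) \
    .hexdigest().startswith("0000")
```

raising `difficulty` by one hex digit multiplies the expected search work by sixteen while the verification cost stays at a single hash, which is the asymmetry the lecture's "burn a lot of energy" point relies on.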
let's switch gears a little bit here, unless there were any other questions about distributed decision making. take a pause and a breath. okay, so let's talk a little bit about networking protocols now: we know we want to make decisions, but we need messages to make them happen. networking protocols have many levels, and you can take a networking class to find out more. but first, what are some good examples of distributed decision making? well, i think i just said adding items to a file system is a good example of distributed decision making; monetary transactions are good distributed decision making; pretty much anything where you want to fault-tolerantly decide on something, that's a decision, and if you want to do it in a way that's really hard to screw up, you might want to do that geographically separated, with a distributed decision-making algorithm. and so there's a whole bunch of distributed decision making going on all the time in the cloud, spread across multiple continents, so it's a pretty common operation. anytime you want to turn something into an abort-or-commit decision, and you want to make sure you do that in a way that's hard to interrupt, that's a distributed decision-making situation. so, there are many different network protocols, and you can take a networking class to figure out more about this, but typical levels are these. the physical level is mechanical and electrical signals, you know, how are zero and one represented by voltage levels. the link level is typically packet formats and error control over a single hop in the network; a good example of that would be in wi-fi, the wireless protocols: how does an actual packet get from your laptop to the wi-fi access point. the network level gets us questions about how do i route packets across a whole bunch of link-level links to get from here to beijing; that would be the
network level. and then finally, the transport level is something like reliable message delivery: how do i make sure that when i send something from here to beijing, it's done in a reliable way that doesn't have ordering problems. okay, and so there are many protocols on today's internet. the physical and link layer is down at the bottom here, and you can think of things like ethernet and wi-fi and lte and 5g and all that sort of stuff. the network layer typically has ip in it; that's our big narrow waist that we talked about last time, kind of the universal communication protocol on a global scale these days. the transport layer, like udp and tcp, these are the parts of the protocol that do reliable transmission in some instances, as well as transmitting from one process to another process. and then above that's the application layer, and those are all the things that use these underlying protocols. okay, so the simplest type of network is a broadcast network, a shared communication medium. you can imagine a bus, for instance, where a processor, a bunch of i/o devices, and memory are all on the same physical bus; that's a shared medium. the biggest thing about such a medium is you can broadcast, so the processor could say something that's picked up by a bunch of i/o devices. wi-fi is actually a type of broadcast medium as well. it is interesting that the original ethernet was used as a broadcast network: the lab that i did my research in as a graduate student, we actually had these troughs in the ceiling where a whole bunch of these cables went all around the whole floor, and then they had these taps that would come down to the computers they were attached to, and literally we were all connected to the same transmission line between the router and all the other computers. and so, you know, when you went to communicate, you would start talking on that line, and everybody else could in theory listen to it, and that would
lead to the need for collision protocols to deal with that, and there are lots of examples of these broadcast media. okay, so i'm not going to go into great detail about this, but let's talk through broadcast networks for a second. so for instance, there's a media access (mac) address, typically 48 bits these days, for the hardware interface itself, and in principle every device in the world has a unique address. when a sender goes to broadcast a packet, how does it know who it's for? well, the packet goes to everyone, but typically it's addressed to a particular mac address: the message has a header, which typically includes an ip address but also a mac address, and a body, and that gets broadcast to everybody, and the nodes all selectively ignore the packets that aren't for them, so only the node that's supposed to receive it actually receives it. and this is pretty standard, certainly for wi-fi, and you could imagine it's standard for multiple things on the same ethernet wire, and a number of other types of broadcast communication. now, is there a shortage of mac addresses? well, 48 bits is a lot more bits than 32. in theory at least, the mac addresses are supposed to be unique across the whole world, and i think that mostly is adhered to, but a lot of systems allow you to override the mac address anyway, so i don't know, that's a good question; i've never asked whether the mac addresses were running out, but there's a lot more mac address space than there is ip space, because 48 bits is a lot more than 32.
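as a sketch, the selective-receive idea looks something like this in python; the mac addresses and the frame format here are made up for illustration:

```python
# sketch of selective receive on a broadcast medium: every node "hears"
# every frame, but keeps only frames addressed to its own mac address or
# to the all-ones broadcast address
BROADCAST = "ff:ff:ff:ff:ff:ff"

class Node:
    def __init__(self, mac):
        self.mac = mac
        self.received = []

    def hear(self, frame):
        # in real hardware this filter runs on the network card, so the
        # operating system never sees frames destined for other hosts
        if frame["dst"] in (self.mac, BROADCAST):
            self.received.append(frame["body"])

nodes = [Node("aa:bb:cc:00:00:01"), Node("aa:bb:cc:00:00:02")]

# the shared medium delivers every frame to every node
frame = {"dst": "aa:bb:cc:00:00:02", "body": "hello"}
for n in nodes:
    n.hear(frame)

assert nodes[0].received == []           # not addressed to node 0
assert nodes[1].received == ["hello"]    # node 1 keeps it

# a broadcast frame is kept by everybody
for n in nodes:
    n.hear({"dst": BROADCAST, "body": "who-has 128.32.131.42?"})
assert all("who-has 128.32.131.42?" in n.received for n in nodes)
```

promiscuous mode, mentioned just below, amounts to disabling the `if` test in `hear` so every frame on the medium is passed up.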
you know, the check about whether to receive or not is typically done in hardware, so when you go to send something on a broadcast medium and it's received, the hardware card basically does the selection and only forwards packets that are really destined for it up into the operating system; the operating system in typical use doesn't have to look at every packet that goes by. now there is a possibility, if you want to snoop on a network, to put the network card in something called promiscuous mode, and in that case you can actually snoop on packets that are going by. so, 168 says that there is a shortage of mac addresses; is that what you're saying? i would believe that it's possible. and is there any security measure? i'm not sure i understand the question: is there a security measure about whether people are allowed to receive your messages or not, is that the question being asked here? so, there's no security on the message transmission layer, so if you think you need security, which everybody should, then you need to explicitly encrypt. this is why you should never log into anything unless you're using ssl properly, because pretty much anybody can snoop, and you just gotta realize that's the way it is. so, the mac address is a unique physical address of the interface. you can easily find mac addresses on your machine or device; for instance, i'm sure you guys have all done this with ifconfig, or ipconfig on windows, or you pull up the about screen on your phone and you can see what the wi-fi mac address is. that's a 48-bit, multi-octet address, and if you look here, for instance, if you do ipconfig on a windows box, you can kind of see where the mac addresses are, right here, etc. and so the mac address is the physical address of a physical endpoint. okay, now, why have a shared bus at all? you could ask yourself, well, why should we do this sort of broadcast thing? well, clearly when you're
talking about something like wi-fi, you pretty much don't have a choice, because everybody's bits are flying by, but if you have a physical network, you know, why bother? and the answer is, well, you don't have to; it was just that in the original days of the network it was too expensive to do something other than broadcast media. so why not simplify, and have point-to-point links and routers and switches? and the answer is, that's the way it is done now. a point-to-point network is basically a network in which every physical wire is connected to only two computers, and so here's an example of a switch, where you have a bunch of computers attached to a switch, and it's a bridge that basically transforms the shared broadcast media configuration into a point-to-point configuration. typically these are like ethernet ports; a switch is something you might buy at best buy or fry's or something, and when you plug your machine in, the switch figures out what mac address you've got, and so then any communication to your mac address will be switched automatically to you without bothering anybody else. so the switch will actually transform what would have been a broadcast medium into a point-to-point medium automatically, and it does that adaptively. a router is a device that basically acts as a junction between physical networks. so the switch is faking out what we would do if we put all these on the same wire, but it's making it much more efficient; a router, on the other hand, is like connecting different wires, and when we talk in a second about ip, the thing that distinguishes a router from a switch is that a router will take you to different subnets, ultimately into the internet as a whole. okay, so the internet protocol, which is the network-level stack, basically provides best-effort packet delivery, and so when you take messages that are going from here to beijing, for instance, they'll have an ip address for your destination, and they'll have a lot of
mac addresses along the way, but those mac addresses are only good on the local wire. okay, and so, yeah, there'll be the mac address of your source computer and a mac address of your port into the ip network, but really this green thing, which is the ip address, is the part that gets it from source to destination, not the mac address. and the other thing is, you put a bunch of packets into the network, and they may come out out of order, they may come out duplicated, they may come out with one of them showing up and other ones dropped. so this is what we call a datagram service, which basically takes packets from one side and mostly transmits them to the other, but without guarantees; it's a best-effort service. and so that is what the current internet is, and we're going to have to figure out how to turn that into something we can actually utilize for real packets, so that we can do our decision-making protocols on top of it. so, there are two spaces these days of ip addresses: there's ipv4, which is still by far more common than ipv6, which i'll tell you about in a moment. ipv4 addresses are 32-bit integer addresses, and they're used as destinations for packets. they're often written as four dot-separated integers, like 169.229.60.83; together these are 32 bits. this, for instance, used to be the cs file server, at least, i'm not sure if it still is. you could also write this in hex as 0xa9e53c53; bottom line is, this is 32 bits. a host is basically a computer connected directly to the internet, and it typically has one or more ip addresses for routing; some of these may be private rather than public. ah, why don't we talk about ipv5? i don't know that that exists; if it did, it's buried in the annals of history somewhere. it's interesting to note that not every computer has a unique ip address: groups of machines might actually share the same ip address. so, this is going to be very
common these days, with everybody staying at home in the pandemic: comcast brings an ip address into your house, and then you have a router there, and a whole bunch of cell phones and laptops and computers and all that sort of stuff are all behind that one public ip address, and you have a bunch of private ip addresses. so basically, all of the computers in your house right now are sharing the same public ip address with the rest of the world, and the way that works is something called network address translation, where even though each computer has a unique local private address, all of the traffic that goes out of the router and into the comcast network gets translated into that single public address. now, a subnet is a range of ip addresses, and it's identified by a 32-bit value with the bits that differ set to zero. so for instance, here's 128.32.131.0/24: this basically says that all the computers on that subnet share the prefix 128.32.131, and so that allows up to 254 machines that are uniquely on there (256 minus the network and broadcast addresses). the same subnet is also written like this, and i don't know if any of you have ever actually done any configuration of your home networks: 128.32.131.0 with 255.255.255.0. that 255.255.255.0 is called a mask, and what that also says is that all of the addresses in this subnet share these top 24 bits, but the lower eight are assignable any way you want; so the mask is basically this set of prefix bits. so why am i telling you about subnets at all? the answer is that when we're trying to route a message from point a to point b, we're typically targeting a subnet; we're targeting some prefix of the address we're going to for the next hop. so, routing within a subnet is by mac address, and the rest is ip. i also just briefly wanted to mention a few address ranges here, just so you know. so, a class a address is one where the
top eight bits identify the network; class b is the top 16 bits, and class c is the top 24. it is interesting that organizations used to own, say, all of the addresses like this: mit, for instance, i know is 18 dot, and then 24 bits are free, so the mit address range is quite large; at berkeley, the university of california has two 16-bit class b addresses. so, let's see, what else did i want to say here. organizations often own these, so why did i mention this? well, these addresses are often handed out, and so you can imagine that one of the reasons we're running out of addresses in the 32-bit address space is really because big ranges of addresses are already owned by organizations, whether or not they're in use. so in addition to the fact that 32 bits is really not a lot of addresses, there's a bunch of them that are just already owned and not necessarily available for anybody else. okay, so just to get this moving forward a little bit, our packet format is like this: a typical ip packet has data, of course, which we want to transmit, and there's a bunch of things in the headers which we won't go into in great detail, but i did want to show you here a 32-bit source address and a 32-bit destination address. so when you're sending a packet off, you build this packet, you put in your address, which is the source address, you put in where you want to go, and then you put in what protocol; for instance, if you're doing tcp or udp, that would be in the protocol type. and you send it off, and it's up to the rest of the network to route it from point a to point b. now, this is a datagram, so it's got data in it, it's got a header, and it gets sent off into the network, and it either makes it or it doesn't; there's not a hundred percent guarantee from any of the hops that it will make it. the function of the network is to deliver datagrams as well as possible. so, a wide area network now is basically a network that covers a broad area, often
called a wan. it could be like the whole world, for instance, or it could be, you know, the state of california, what have you. the wan connects multiple physical networks, or local area networks. so if you look here, each one of these links could potentially be a subnet, and pretty much everything connected to a subnet in here would have a unique set of mac addresses used to route within it. but what actually happens is, host a wraps up an ip packet, and it works its way hop to hop till it gets to the destination. and so these things in the middle are routers, which i mentioned earlier, and they're basically taking you from one subnet to the next; each one of these hops is typically another subnet. all right, so a router forwards packets from an incoming link to an outgoing link. for instance, if we looked at any one of those router points, we'd see a bunch of incoming links, and we'd see a router, which is typically a special piece of hardware that is tuned to transmit these packets in and out as fast as possible. you can think of this as a sorting network: a packet comes in on one side, it gets sorted to the next hop, and it goes out, and if these are 40-gigabit links or 100-gigabit links or whatever you're currently transporting here, this needs to be extremely high-powered hardware, some combination of hardware and software, to do this very rapidly. okay, so that's the forwarding idea. and if you notice, isn't that great: watch, we're starting here, we've got our ip address of b where we're going, and it's just going to get forwarded through the routers to the destination. and the magical thing about this is, if you think about all of the hosts in the world, billions and billions of hosts in the world, this works; it actually routes packets, and it mostly works. so that's actually pretty amazing to think about every now and then, when you think about scale,
how big things actually are, how many addresses there really are out there; the fact that this all mostly works is, i think, astonishing. i mean, you can easily understand all the mechanisms in there, but when you look at it at scale, the fact that it actually works is pretty cool. okay, and so, you know, upon receiving a packet, the router reads the ip destination address, picks the next port, and sends it out, and that just happens over and over again. oftentimes there's a default route, which is: if a router doesn't know where to go next, it'll send the packet on to a router that it thinks will know where to go next. so, i wanted to draw a little bit of a distinction between ip addresses and mac addresses. if you remember, the mac addresses are used locally and the ip addresses are used for these long-haul communications, and the question might be why. well, if you look here, you can imagine a person: this person is identified by their social security number, and that's a unique person, and they're at some address in washington dc, and then they come over to california for a conference or something, or maybe they've moved to euclid avenue in berkeley. so why don't we just use mac addresses for routing? you can imagine that we just route packets to the mac address, if it's truly unique; it'd be like routing all mail to a social security number. and so the question might be, why not do that? the answer is, it doesn't scale. and so the analogy of mac addresses to social security numbers, and ip addresses to home addresses, hopefully is a good one for you, right? because when you're routing to this person, basically you're using their mailing address, which is in berkeley, california, and this is hierarchically routed, just like ip would be: first to california, then to berkeley, then to euclid avenue, and then to 1051, and that's how you get to that person when they happen to be there. okay, and so
the mac address is uniquely associated with the device for the lifetime of the device, while the ip address changes as the person moves. i don't know if that helps or not, but this is why we typically use ip addresses to route. so, why does packet forwarding by ip address scale? the answer is, because if you look at what i talked about with subnets, really there are prefixes, and what you're really doing, as you're trying to route from point a to point b, is you first route on some early prefix of the ip address, and then on more detailed prefixes, until eventually you get to the subnet that has the actual final computer on it. and so it scales because all of the addresses at mit, for instance, could get routed by just matching 18 in the first eight bits, and then you get to mit, and then you let mit worry about routing it the rest of the way. and the analogy here is: give this letter to the person with social security number blah, versus give this letter to john smith, 123 first straight street laus; this latter one is much more of a hierarchical routing, and it's much more scalable. so, how do we set up these routing tables? well, the internet has no centralized state, so no machine knows the entire topology, and so you need a dynamic algorithm that acquires the routing tables; you'd ideally have one entry per subnet or portion of address. as for possible algorithms for acquiring routing tables, you can take a networking class to hear more about this, but for instance, there's one that works kind of locally: the routing table has a cost for each entry, basically what's the fastest path from point a to point b, neighbors keep telling each other over and over again who they know about, and you have this dynamic algorithm that converges. in reality, that particular algorithm doesn't scale beyond local subnets; there are really many different levels at many different scales. there's a protocol called bgp that handles global routing, and it has a way
of exchanging routing tables that adhere to certain policies and so on. and so that process of making the routing tables, so the routers can do their jobs, is in itself a really interesting distributed algorithm, which is occasionally unstable. there have been some really interesting outages in the internet over the years, where bgp got stuck with some loops, or there was some key link in the network that went down and there was no way to route around it, and the routing tables became unstable. so this is in itself an interesting problem, which we're not going to study further, but i wanted to mention it. and really, if we just go back a slide, when we look at this slide, we're trying to get from a to b, and the question is, at each hop, how does the router know what the right next hop is, based on where you're trying to go? those are the routing tables, and those routing tables are the big dynamic algorithm that i just mentioned. okay, so the last topic, if you guys give me a few more minutes, and then we'll pick this up on monday: naming is a big issue. so if you look, people like to use names for things, but addresses are what the underlying system likes to use. and so when i'm trying to send something to this guy, i have to find out what his address is; i've got to look him up somehow. and basically, the way that works in the internet, as you're well aware, is you're taking names like www.berkeley.edu and transforming them into an ip address, 128.32.139.48. and things like google, actually, when you look up google.com, you possibly get a different address; if you do this several days in a row you're going to get a different address, or you're certainly going to get a different address in different parts of the world or the country, because that common name gets mapped to a bunch of different servers. but anyway, this process of mapping a human-readable name to something that can
actually be routed is something that needs to be done, because ip addresses are really hard to remember, and they also change. and so the mechanism is the domain name service, dns, which i'm sure you've heard of; it's a system that's been around for a long time, and it basically defines domains hierarchically. so for instance, eecs.berkeley.edu is a domain, and there's the www, which is a particular machine, and that domain eecs.berkeley.edu is referenced off of the berkeley.edu domain, which is referenced off of the edu domain, which is referenced off the top level. and so there's a hierarchical lookup process for dns; it works backwards, right? if i'm trying to find www.eecs.berkeley.edu, i start at the top, i go to edu, then i go to berkeley.edu, and then i go to eecs.berkeley.edu, referencing the lookup down the hierarchy. so dns is a hierarchical mechanism for naming: each domain is owned by a different organization, and the top level is actually handed out by an organization called the internet corporation for assigned names and numbers, or icann, and you typically have to get assigned these domains at the top level, and you have to pay for them. and the resolution of this is, if i'm over here, or somewhere else in the world, and i'm trying to look up www.eecs.berkeley.edu, i go through a hierarchical set of queries to the dns system to get that number, and then the network takes over. and because this is a long process, dns is cached in lots of ways, so if you look something up because you're browsing the web, that result will be cached in your machine for a while, until the cache expires. so remember, everything in operating systems is a cache; you guys can quote me on that, because it's true. now, how important is it to correctly resolve the mapping from name to ip address? well, you can imagine the answer is: very. right, because if an attacker manages to give you an incorrect mapping and get somebody to route to a server thinking they're routing to something
else they might do the wrong thing okay and probably many of you have at one time or another gotten a complaint that uh the certificate is not valid when you were trying to go to a website and you probably all said oh just ignore it but in fact there is a real attack problem here where uh somebody manages to convince your local dns server to give you a wrong machine and they're redirecting your attempt to log into the bank to the wrong server and they're trying to get your password and ultimately your money so this mapping between names and ip addresses is a security hole now you might ask is dns secure mostly it's it's a weak link and uh it turns out that there's been various holes in it over the years there was in fact a really famous one in 2008. you guys can look this up look up dan kaminsky he discovered an attack that basically broke dns globally because what it was was it was a way of responding by pretending to be a dns server that somebody was querying and doing it fast enough that you could convince a whole chunk of an isp to give the wrong mapping to uh to a lookup and then everybody that happened to be logged into the isp at that time would get the wrong lookups and you could do this regardless of the security on the dns servers and needless to say this was bad but what dan did was he actually contacted all the major vendors uh of software and explained what was going on and got them to mostly patch it before it was announced in a paper but if you if you google that look it up it's uh you know it gives you an example of what could happen all right so i'm going to end for now we've run out of time but we talked about two-phase commit as a good instance of distributed decision making first you make sure that everybody guarantees that they will do the same thing they'll commit if they're asked and next you ask everybody to commit if that doesn't happen then everybody's going to abort okay that's the important part we talked about the
byzantine generals problem in some detail which is a distributed decision making with malicious failures one general n minus one lieutenants some number of them may be malicious we often call that f and we need to have a total number of uh general plus lieutenants that's at least three f plus one to make this solvable we talked a little bit about blockchain protocols they basically are a cryptographically driven ordering protocol and we talked about how blockchain is really a type of distributed decision making um we started talking about the ip protocol we'll finish up the little bit that i'm going to talk about in this class next time but it's a datagram packet delivery service used to route messages across the globe 32-bit addresses 16-bit ports we'll get to that a little bit more when we go forward we talk more about ports dns is a system for mapping from names to ip addresses uh which needs to be secure because humans uh aren't that good at remembering ip addresses in general all right and we'll talk about uh ordering and reliability in tcp next time so um i hope you all have a great weekend um those of you that are in the berkeley area i don't know i think it's going to be cold and rainy but uh anyway stay safe and we will see you next week you |
CS_162_Operating_Systems_and_Systems_Programming_Berkeley | CS162_Lecture_14_Memory_2_Virtual_Memory_Cont_Caching_and_TLBs.txt | welcome back everybody to cs 162. so um we're going to pick up where we left off talking about virtual memory and uh memory mapping and then we'll continue with paging next time but if you remember we were looking at this idea in general of address translation and the memory management unit that does it and in this scenario where virtual addresses are coming out of the cpu they get translated by this memory management unit into physical addresses which represent the actual uh positions of the bits in the physical dram so there's kind of two views of memory there's the view from the cpu which is the virtual addresses and the view from memory which is the physical addresses and those two um are basically related by a page table which is what the mmu supports now the one thing that we did talk through last time is we actually gave you a couple of examples where we walked through some um instruction execution that was going on in the processor and we kind of showed you when you keep to the virtual addresses and when you actually have to translate into physical ones uh so with translation it's much easier to uh implement protection because if two processes have their translation tables set up so they never intersect in physical memory then it's impossible for them to interfere with each other through memory an extra benefit of this of course is that everybody gets their own view of their personal address space which means that you can link a program you know once and run it multiple times on the same machine okay everybody gets their own zero is the way i like to think of that we talked about simple paging in this context and the idea of simple paging is basically that there is a page table pointer and that page table pointer points to memory which is uh a set of consecutive translations we'll call these page table
entries a little bit later and these uh consecutive entries basically have both a physical page number and some permission bits okay and there's one of these page tables per process all right and so the way the virtual address mapping goes we talked about this is you start with an offset which is how big your page is and so for instance for a 1k page that's going to be 10 bits for a 4k page that'll be 12. and then all the rest is the virtual page number and that offset never gets changed by the page mapping so that's copied directly into the physical address and then the virtual page number is basically used as an index into the page table you look that up and that gives you the physical page number which gets copied in and uh you're good to go on the physical address so for instance uh if you had uh 1 kbyte pages or 1024 byte pages um there's 10 bits of offset and what's left in red is basically 32 minus 10 bits or 22 bits in a 32-bit machine so there's essentially uh up to four million entries in this page table all right and so among other things we're not necessarily going to use them all and so we need the page table size and so there are certain uh indices in here or virtual page numbers which are above the page table size in which case you get an error and then we also need to check our permissions so if we try to do a write and you see that this particular page is marked as valid that's v and read but not write this is essentially a read-only page so if you attempted to write to that address you'd get an error okay were there any questions on this simple paging idea now we're saying we're talking about the function of that memory management unit i showed you earlier okay so the the hardware there is going to help us by translating these virtual addresses into physical ones so the other thing i did is i gave you a very simple example this is almost a silly example because it's four byte pages but because they're four byte pages you know that the
offset's only going to be two bits and basically what we can see here is that a virtual address zero zero we write that all out into uh into binary basically the lower two bits are zero the upper two um upper six bits in this case are zeros and so we take what's in red here and that's going to be our virtual page id which we'll look up in our page table and what we see there is that there's a four okay and i don't have any permission bits in this example but that four represents the translated page okay and so i take that four that's really zero zero zero one zero zero okay that's the physical page id i copied the offset and that told me that things that are up here in uh the virtual address space are gonna be down here in physical space and we looked at uh multiple options here as well like for instance things from four to eight map basically to page three which is down here things from 8 to c are going to map to the green up here and that's going through the page table all right and then of course if you look at things in the middle like for instance 4 here has got an e in it uh five it's got an f six has got a g where does that translate over here well we can see that that's gonna be right um basically over here but the question is how do we get there well we take the fact that six is zero zero zero zero zero one one zero because one one zero is six as you all know and um that means that we're talking about page one which is the blue one the offset is one zero and that takes us over to this point oe and similarly 9 takes us over to this point 05. 
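that four-byte-page example can be written down as a tiny translation function; the page table contents ([4, 3, 1]) are read off the lecture's example, and the rest is just the split-index-copy bit arithmetic described above:

```python
# Sketch of simple-paging translation for the 4-byte-page example:
# 2-bit offset, the rest of the address is the virtual page number.
PAGE_TABLE = [4, 3, 1]   # vpn -> physical page number (from the example)
OFFSET_BITS = 2          # 4-byte pages

def translate(vaddr):
    vpn = vaddr >> OFFSET_BITS                  # virtual page number
    offset = vaddr & ((1 << OFFSET_BITS) - 1)   # offset, copied unchanged
    if vpn >= len(PAGE_TABLE):                  # beyond the page table size
        raise MemoryError("invalid virtual page %d" % vpn)
    ppn = PAGE_TABLE[vpn]
    return (ppn << OFFSET_BITS) | offset

assert translate(0x0) == 0x10   # vpn 0 -> physical page 4
assert translate(0x6) == 0x0e   # vpn 1 -> physical page 3, offset 0b10
assert translate(0x9) == 0x05   # vpn 2 -> physical page 1, offset 0b01
```

a virtual page number past the table's length raises an error, which is the page-table-size check from the earlier slide (permission bits are omitted here to keep the sketch small).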
okay now there's a question here is page 0 always unmapped so that dereferencing null pointers always cause page faults no not necessarily because sometimes uh page zero is uh reserved for the operating system and can represent um different things like i o and so on so zero is not always unmapped um it often is but you can't necessarily be assured of that and that's why in fact null references can be very bad because some languages like c let them happen um okay so but you could if you can afford to unmap zero then uh obviously you get a little bit of extra protection there because you could cause a page fault uh because zero wouldn't be valid in that case so what about sharing so um first of all actually let me stop here for a second are there any pieces of this that people are worried about i told you last time that um you need to get really good at transforming between hex which is four bits at a time and binary um i'd get to memorize that quite well because that's something that'll serve you well if you know how to do it all right um good now what about sharing all right so once we start having this mapping now we can do some pretty interesting things okay here's a question let me just answer this why are we taking the top six bits when there's only three entries in the page table well because presumably this is an 8-bit machine and therefore everything that's not an offset remember i said these are four byte pages is basically the uh page id the virtual page id and so this page table needs to uh potentially have up to 64 entries in it but because the page table uh only has three entries then the the size of the page table is going to be set at three okay and if you notice back here in my previous example i showed you this idea that you have a page table size and so what that really says is there are some virtual addresses in this scheme that are not valid they're ones that are for which the virtual page id is too big all right good
question okay but we have to take six bits because we have to take all the bits that aren't the offset now uh if you look here so here's another example we have our virtual page id in the offset and what's interesting about this scheme is that now we can do something like this virtual page number which is going to have a 2 in it it's going to be 0 002 here might map to a place in physical memory okay and um we might have a second uh page table that also maps to that same place in physical memory so now we have two processes two separate page tables both mapping to the same physical page okay so this is interesting right this basically means that that physical page now appears in the address space of both processes so they can share information all right so if uh process a writes to uh something in page two it'll show up in this page if virtual if process b writes to somewhere in page four it'll show up in this page and they can read and write each other's data all right now this is not a great mapping okay why well because i mapped the same page to different parts of the address space for these two processes so in fact if you look in process a i can read write at address 0x0002 xxx and in b i can read at address 0x0004 xxx and so the addresses are actually different all right which means that i can't make a linked list here and have the addresses mean something between the two processes so that's a little broken so in fact it would be better to actually link them to the same place now there's a good question in the chat here about can you arrange to set that up and yes there are virtual memory mapping system calls that allow you to map the same page to the same part of virtual memory and thereby make sure that you can do things like linked lists that are shared between multiple processes notice that the other thing that i've shown you here is that process a has both read and write permission to this page while process b does not and so that might be a producer consumer
scenario where process a is producing something and process b is consuming it and of course once you've got shared memory then you need to synchronize and we get back to the synchronization we've been talking about all right questions can everybody see why i'm talking about all the addresses being of the form 0x0002 xxx and 0x0004 xxx okay why why am i saying that process a has addresses like zero zero zero zero two x x x okay yeah so this corresponds to virtual page number two and number four and if you notice the this is hex right so hex represents four bits so i have xxx in this instance there are um 12 bits total so i'm talking about a 12 bit offset which means a 4k page okay in this instance and then all the other bits the the remaining ones above are going to be um the ones that i use for my virtual page number and so also get comfortable with figuring out what's the offset and then what's left over is a virtual page number all right good and of course if i if i map the page in the same place in both of these then the addresses would exactly match and then i could make a linked list or something okay so what's a typical offset nowadays that's a good question so 4k 12 bits very common um some of the higher end machines might get you to 16k okay but um 4k is is pretty common okay or 12 bits now where do we use sharing all over the place so remember we started out this term at the very beginning saying we needed to protect address spaces from each other so their processes were protected from each other and the kernel was protected from the processes but we have this sharing mechanism and i like to think of sharing as selective punching of the the careful boundaries we've put in processes in a way that does the kind of sharing we want so for instance the kernel region of every process has the same page table entries for the kernel okay and that allows you to basically pop in and out of the kernel um to uh without having to change any page table mappings okay so 
the process is not i'll show you this in a second the process is not allowed to access it at user level but once you go from user to kernel like say for a system call now the kernel code can both access its own data and the user's data okay but if it wants to access data from other user processes it's going to have to do something different at that point if you want different processes running the same binary we talked last time i was accused of starting a a culture war but if you want to run emacs multiple times for instance or vi if you want to be a west coaster then um you can have the same binary stored in a set of physical pages and then multiple processes can link to that binary and you don't have to waste memory with duplicate code okay that's great and that's that extends to dynamic uh user level system libraries you can also make that be uh shared only read only excuse me and then everybody can share them and so obviously the last one is the one i was just showing you which is sharing memory segments between different processes allowing you to essentially share objects between different processes and thereby do you know interesting communication now of course you got to be careful about that because the two processes are now trusting each other to put data in each of those you know in that shared page that is properly formatted and can be properly interpreted by the other process okay so that's a little bit less secure potentially unless you're very careful so um now we can do some simple security measures also with this like for instance we can randomize where the user code is rather than always starting it at a particular part in virtual address space we can start it in different parts and that randomization which i'll show you in another picture helps to make it harder to attack when you've got certain things like overflow errors and so on which you might have heard about if you've taken 161 it also means the stack and the heap can start anywhere again 
for security reasons and then we can also use kernel address space isolation where we don't map the whole kernel space only part of it and that can give us a little more security notice that when we talk about meltdown which we will mention in a subsequent lecture we have to in fact make sure that essentially none of the kernel space is mapped into user space but we'll get to that a little later but if you look at this scheme i've got here with user space and kernel space what this means is that because of bits that are set in the page table entry when i'm at user mode i'm not able to actually access any of the kernel page table entries even though they're mapped they're not available to the user but the moment that you take a system call now suddenly both the user's memory and the kernel memory are all available to the kernel and this makes it much cleaner and simpler to do a system call so here's a typical layout i actually showed you this last time but we see a bunch of holes in here and these holes are basically allowing us to do randomization and thereby making it harder to put executable code on the stack and a few other things and harder to attack and so that's a good security measure okay but all of these holes are things that we need to support and of course unfortunately so far with our page tables we don't have a good way to support holes because in order to go from zero up to ffffffff we need to have all of the page table filled 100 percent filled and a lot of these empty spots are just going to be null entries that say you know invalid and that's a waste and so we need to do something different and that's part of our topic next okay questions okay so right now the page table i've showed you this is answering a question in the chat doesn't actually allow you to spread everything around without wasting a bunch of entries that are null okay so um right now in order to map this virtual space i would have to have all of my entries but um a bunch of them
are going to be empty and that's a waste and so we're going to fix that okay and you're right we can map around in physical space any way we want but virtual the virtual part of this is is wasted yet so just to summarize i just wanted to give you a little bit here's an example we have to have here's our virtual memory we've got all these holes that means the page table has to be a hundred percent full okay so those advantages that we might have by setting the length of the page table to less than the full size we lose it because we have to map the whole page table and that's because we need to have things like the stack at the top of the page table and things like the code near the bottom and so that's a waste the other thing i wanted to show you here is this virtual memory view goes through the page table and maps to data that's potentially spread all over in the physical memory and i'm even showing you some gray things here which represent other processes and so that scrambling of the physical memory is a big advantage of page tables because now we can manage it much easier because every one of these pages is exactly the same size and we can allocate or deallocate them any way we want the other interesting thing i want to show you here is here's the typical stack grows down heap grows up if you notice in this case the stack only currently has two pages associated with it that are actually mapped the rest of these entries like the one right underneath the stack if you look over here in the page table is currently got a null entry in it okay and that null entry means there's nothing mapped in here so the moment that we get to try to go below that stack and suppose we're just pushing things on the stack and we hit this point we're going to cause a page fault which we'll talk a lot about next time and at that point we can actually add some more memory okay so if the stack grows we just add some more uh stack and now all of a sudden we've got more stack and so what's 
great about this is that we're able to start with the smallest amount of physical stack that we can and we'll grow the stack as a process needs it so we don't have to commit physical resources to the stack because the page faulting lets us grow that dynamically as we need it the page table base register is actually going to be in cr3 in the x86 processor and so i'll show you that in a moment um okay the challenge now just to summarize what i've been saying here is the table size is equal to the number of pages in virtual memory so if you were to count up the number of potential pages even the empty ones that's the size of our page table in entries and the thing i'm saying is really unfortunate about what i've told you so clearly what i've told you isn't really the full story okay and that's our next topic so how big do things get all right so let's talk about size if we have a 32-bit address space by the way i'm gonna go through these just to make sure everybody's on the same page with their powers of two okay this is you are now uh uber os students and so you need to know these things and you'll know them well so for instance in a 32-bit address space 2 to the 32 bytes or 4 gigabytes okay and notice that i've got capital g capital b so a lowercase b means a bit a capital b means a byte okay that's eight bits for memory all right kilo is not a thousand it's two to the ten which is a thousand twenty four that's a little more than a thousand now i think in 61a or something they might have called this a kibi okay and kibis are great if uh although they always sound like cat food to me but they're great if you've uh if they come for you okay a kooby byte i don't know how much a kooby byte is it's really big um m for mega is 2 to the 20 which is about a million okay it's a little more than a million um sometimes called a mebi g for giga is 2 to the 30 a little more than a billion sometimes called a gibi the thing that you need to know is that when
you're dealing with memory you need to sort of mentally translate k m and g into the powers of two rather than powers of ten because um people don't always give you kibi and gibi in fact they do it far uh less often than you might like okay and um and it might be mebi with an e uh that's true um so the other thing that's a little confusing about this by the way is that when you start dealing with things like network bandwidth and you say kilobytes per second that is a power of 10. okay and so this unfortunately this terminology is very confusing and um and i just want you to be aware of the confusion because you're going to run into it as you go so typical page size as i said was four kilobytes which is uh how many bits well if two to the tenth is a thousand twenty four then four kilobytes is an extra two bits because two to the second is 4 and so that's 12 bits okay how big is a sample page table for each process well let's look at this if a page itself is 2 to the 12th in size bytes and they're 2 to the 32 total i just divide the two which means i subtract the powers and i get 2 to the 20 which is about a million entries and they're going to be 4 bytes each i'm going to show you what the entries look like in a moment so that's about 4 megabytes would be wasted in a page table where a lot of them are are empty so we're going to need to do something different so when 32-bit machines first got started this is things like the vax 11/780 the intel 80386 et cetera 16 megabytes was a lot okay and so four megabytes was a quarter of all memory so this is clearly not something we want to do all right um and just to hammer this home so how big is a page table in a 64-bit processor all right so 2 to the 64 over 2 to the 12th is 2 to the 52nd which is about uh 4.5 peta entries 4.5 times 10 to the 15th they'd be eight bytes each which is 36 petabytes in a single page table all right that's clearly a waste and so um this page table thing that i showed you i'm
calling it a simple page table it's clearly not what we want this is just a lot of wasted space all right questions so the address space is fundamentally sparse remember all those holes i showed you and so we want a layout of our page tables that handle holes well and really what's a page table so um let's think about this what do you need to switch on a context switch well you just need to switch the top pointer to the address space so that's easy in some sense now what is not so easy is oftentimes you have to flush a bunch of tlb entries and so on so switching the address space can be more expensive than just switching the pointer now what provides the protection here well translation per process and dual mode execution so what that means is only the operating system is able to um to install the page table pointer and only the kernel is allowed to change the page tables okay because we can't let the process alter its own page table now the question about is the process's page table stored with its pcb typically it's a different part of memory it's kind of like on the kernel's heap in some sense because the pcb has sort of got pointers to everything but it doesn't necessarily contain big things like page tables that's a good question though it could in principle it often doesn't but some analysis here is the pros of the page table thing that we've come up with so far is it's very simple memory allocation because every page is the same size it's easy to do sharing the cons are if the address space is sparse which it is then you start wasting a bunch of entries if the table's really big now the problem is that you're not running every process all the time and so you're wasting a huge amount of memory and it'd be really nice if we could have an actual working set of our page table and so you can see that we're going to stray into caching very quickly here all right so the simple page table is just way too big and we don't want to have it all in memory
etc and so is there something else we can do and maybe we could make our table have multiple levels in it and so that's where we're going now is the uh does the page table also specify whether something's accessible to the user or the kernel yes and there's a bit in the page table entry i'll show you that in just a second so how do we structure the page table well a page table is just a map or a function from virtual page number to physical page number all right like this right virtual address in physical address out and so there's nothing that says that this just has to be a single table um if it is a single table it's very large just ridiculously large as we just showed what else could we do well we could build a tree or we could build hash tables okay you you think of it uh we could come up with it and so um one fix for uh the sparse address space is the two level page table idea and i wanna show you what i like to call the magic page table this is a fun one you'll see why it's magic in a moment but this is for 32-bit addresses and it's a tree of page tables where uh we have 4k pages so 4k pages means 12 bits of offset and four byte page table entries okay so these are four bytes total and i'll show you what's in those four bytes in a moment but what that means is that we can take the virtual address we have our 12-bit offset and two 10-bit indices and the first ten bits goes to the first level page table and it's used to select one of a thousand twenty four entries which there will be because four k bytes divided by uh four bytes is a thousand twenty four and then the second ten bits will actually pick the second level and that will give us the physical page number and of course we copy the offset there okay and so the tables in here are all fixed size right and in particular they're all 4k bytes in size so these page table sub pieces are four kilobytes this one is four kilobytes the pages themselves are four kilobytes so what's cool about everything being
four kilobytes is now we can start talking about swapping parts of the or paging out parts of the page table to disk and so only those parts of the page table we're actively using even have to be in memory okay now of course the top level one always has to be there if that process can run but there's a whole bunch of other ones at lower levels that don't have to be there okay so the tables are fixed size on a context switch we just have to save this single page table pointer in the pcb for instance and the page tables themselves aren't necessarily stored in the pcb but that page table pointer is the address space descriptor and just by switching that out possibly with flushing tlbs we'll get to that later um is enough to change the whole address space of the machine and go from one process to the next okay now the valid bits on the page table entries i sort of indicated we could page we could swap out the pages but what did i mean by that well if you if you look at this situation where we take 10 bits we look it up in the first level page table if we had this second level page table if that wanted to be out on disk we could actually mark this first level as invalid and then what would happen is we would uh try to look up this virtual address those 10 bits would look up the first page table entry we would see it's invalid we'd cause a page fault that page fault would then get resolved by the operating system by bringing the next level page table in we'd retry and now the first 10 bits would work the second 10 bits would get tried and maybe this one would be marked invalid in which case we pull the actual page in from disk and then we finally are able to actually do the reference now that sounds really expensive because the disk remember access is a million instructions worth but because of a sort of a caching view of the world we only do this once and then the multiple times that we do that afterwards everything's faster okay all right questions now good question
is the information about how the page table is structured built into the hardware yes all right so that typical machines these days like the memory management unit i showed you that would be on the x86 have a particular structure for the page table built into them okay and it's thereby the same for uh pretty much all processes that are actually running on a given machine at a given time now some machines like the mips processor line and basically the things that were related to them actually do something a little different where they don't have hardware that walks its way through the page table or does what we call a page table walk they actually have software and when you um try to access something that's not in the tlb which we haven't heard about you'll actually trap to the software and then the software can pretty much structure the page table any way they want but the page tables you're dealing with now with pintos on the x86 that's a hardware page table walk and so the structure of the page table is absolutely built into the hardware all right now um here is the classic 32-bit mode of an x86 i just wanted to show you this um so the intel terminology rather than saying there is two levels of page table they actually call that top level a page directory but you know that's just a terminology thing but essentially you have the cr3 which is the register only accessible to the kernel that defines the top level page table it points at the page directory we take 10 bits off of the address pointed at that page directory that gives us the next page table okay so that's actually going to give us a 20 bit pointer to the next page table in physical memory 10 bits come out of the table the next 10 bits come out that looks up the next page table entry that'll give us 20 bits that represent the physical page and then we combine with the offset and that gives us the actual final address we're looking at okay now i just threw something at you very quickly but
let's see if we can understand this. If you look at the way addresses — even the physical ones — are structured, there are 12 bits of offset and 20 bits of either virtual or physical page number. That means that when I specify a physical page, I have to give 20 bits of unique address; the remaining 12 bits of offset can be anything we want. So a page table entry has 20 bits of physical address in it.

Okay, some administrative trivia. The midterm is coming up Thursday, 10/29, and the topics go up through lecture 17, so we have some good ones: scheduling, deadlock, address translation, virtual memory, caching, TLBs, demand paging, and maybe a little bit of I/O. The first midterm was somewhat of a dry run; this next one will actually require you to have your Zoom up and working when the proctoring TA pops into your Zoom room, so make sure you get your setup going. Things will be almost the same as last time, which worked reasonably well, except that I think we're going to pre-generate all your Zoom rooms for you and you'll just connect to them — watch for that. Either way, make sure your setup is debugged and ready. There will be a review session; we don't have details yet, but we'll get the Zoom information out.

The most important administrative item: the U.S. election is coming up. For those of you who are citizens or have the ability to vote, absolutely vote — it's the most important thing you can do as a U.S. citizen. If you don't, you don't get to complain about the results. Don't miss the opportunity, and of course be safe: if you can vote by mail, do that; otherwise wear a mask and keep your social distance. Without being political, this is potentially the most important election in a century, so don't miss it. And those of you in California: don't use any of the fake ballot boxes — there actually are a bunch of them out there. Go to the post office, and even better, sign up online to track the status of your ballot. I did it; it's awesome. I got a text the moment the post office scanned it, and another when it reached its destination saying it will definitely be counted. So be careful of the fake ballot boxes — thank you for that, Ashley.

All right, so what is this page table entry of which I speak? It's the entry in each of the page tables: either a pointer to the next-level page table or the actual page itself, with permission bits like valid, read-only, and read-write. I'll give you the x86 example; the address has the same format as the previous slide — the magic 10-10-12 split — and the intermediate page tables are called directories on x86. The entry is 32 bits, or 4 bytes, and notice there are 20 bits of physical page number, because remember, you need 20 bits to uniquely identify a 4K page. The remaining 12 bits are interesting. The lowest bit is the presence bit — pretty much everybody except Intel calls it the valid bit; Intel likes to name things differently than anybody else — but it's the same idea. If it's one, this page table entry is valid and you can go ahead and do the translation; if it's zero, the entry is invalid, and all 31 of the remaining bits are essentially free
for the software to use, which can be an interesting way to keep information about where that page really is when it's not valid and mapped in memory. So that's the present bit. The writable bit, W, says whether this page is writable. The U bit says whether this is a user or kernel page — if it's zero it's a kernel page, if it's one it's user; I may have that reversed, so look it up in the spec. Then we have some caching bits: PWT and PCD say whether caching is allowed. Page write-through (PWT) means writes go straight through the external cache, and PCD means the cache is disabled — these two become important when we start talking about memory-mapped I/O in a few lectures. A says whether this page has been accessed recently; it gets set by hardware and reset by software. D is the dirty bit, set by hardware whenever you write to the page and reset by software. And PS gives you the page size: if you set it to zero, it's exactly the 10-10-12 scheme I showed you; if you set it to one, there's only one level of page table and you get 4-megabyte pages, which you might use for the kernel. Questions?

Now, what can you use this for? We'll talk more about it over the next couple of lectures, but an invalid page table entry — where the P bit is zero — can imply all sorts of things. One is that this region of the address space is actually invalid: there may be a hole in the address space that is never going to be filled, in which case a page fault occurs and the process is potentially terminated. The other option is that the page isn't valid right now but is somewhere else, so we potentially go out to disk to pull it in. After the page fault is handled, the kernel resets the page table entry so that the valid bit is now one, retries the load or store, and at that point it
will go through. The validity check always happens first, which means the remaining 31 bits can be used by the operating system for location information — where the page is on disk, for instance — when the page table entry is invalid.

A good example is demand paging, the simplest thing you hear about with paging. Demand paging means we keep only the active pages in memory; the rest are kept out on disk with their page table entries marked invalid. So rather than swapping a whole process out, like we talked about a couple of lectures ago — sending all of its segments to disk — we send out just the pages that aren't being used, and we get much more efficient use of memory that way.

Another interesting one is copy-on-write. We've talked about Unix fork multiple times, and the interesting thing about fork, remember, is that when we fork a new process, both the parent and the child get a copy of the full address space. Rather than making that expensive by copying everything, what you do instead is copy only the page tables and mark all the entries read-only. The moment either the parent or the child tries to write, it gets a page fault, and at that point we copy the page so there are two copies. That's called copy-on-write.

Another is zero-fill-on-demand. You can say: all of these pages are going to be zero, because we want to make sure we don't accidentally reveal information from the previous process that used that physical page. We mark the page invalid; the moment you try to access it, you get a page fault, and the kernel zeroes a physical page for you, maps it, and gives it back. That's zero-fill-on-demand. Essentially — for those of you who have taken interesting language classes in CS — we're late-binding our zero fills and our copies.
Here's another interesting example: sharing. I'm showing you two processes, with a page table pointer and a page table pointer prime; the important thing to see is the green part — a couple of whole sub-pieces of the address space are shared. Now, to the question of whether zero-filling really removes the information from physical memory: at the point the kernel hands the page over, it does overwrite it, so everything gets fully written. And you don't have to worry about what happens before that overwriting, because you can't even read the page — it's marked invalid, so the moment you try to read you get a page fault, and the kernel fills it with zeros before giving it back. No worries about your secret keys at that point.

So we can share whole sub-pieces. You can imagine that a whole big chunk might represent the user's space: you could have a user page table and a kernel page table that mostly share, with the kernel's version holding user plus kernel entries. This will be useful when we talk about the Meltdown problem, but we'll get to that later.

Two-level paging is very simple. Just like before, we have an address; in this small example, the first three bits look up the first-level page table, the next three bits look up the second-level page table, and then you get the actual final physical mapping. This slide shows the position mapping all the way from virtual memory to physical memory, just to give a better idea of how multi-level mapping goes — and notice that we do copy the offset (zero zero zero in this case) straight through. In the best case, the total size of the page table is approximately proportional to the number of pages actually used by the virtual memory. This page table has much less wasted space than a single-level page table, because when we have big chunks of non-mapped space, we put a null in the top-level page table and don't even need the second-level table — we save a whole bunch of page table space when the tables are sparse.

Now, we can make multi-level pretty much anything we want — this is like a meme: if you can think of it, it's been done. What about a tree of tables? The lowest level might still be pages mapped with a bitmap like we talked about, the higher level might be segmented, and you could have many levels. Here's an example: I take the virtual address and split off a segment ID at the top, then a page number, then an offset. I copy the offset through, as always. The virtual segment number indexes a segment table, which gives me the base, in memory, of a page table; the virtual page number then looks up the page table entry, which gives me a physical page number. Of course, for all the sparseness reasons I talked about, what you'd really do is have a segment number and then two levels of page table. Along the way we check for access errors — is it valid, is it writable — and there are various places I can get errors.

What do you have to save and restore on a context switch here? Remember, with the simple page table we only had to save and restore the base. In a segmented situation, as I said a few lectures ago, the segment registers are stored on the processor, so you have to save and restore them during a context switch — a little more expensive. You might say, wait a minute, why aren't these segment registers stored in memory? Simply because there's such a small number of them that they're typically just stored in the processor,
because that's much faster than going to memory, and you only pay the cost when you do a context switch.

What about sharing complete segments? I'm giving you obvious things here — this is par for the course — but you can have the virtual segment number of process A and the virtual segment number of process B both point at the same chunk of page table, and now they're sharing all of the pages in that page table between the two processes. The cool thing about the flexibility of these mapping schemes is that you can do whatever sharing is appropriate. The key is that you're punching holes in the protection afforded by processes, and you're punching those holes carefully, so that you only share when you want to, rather than sharing without knowing it.

So the pros of multi-level translation: you only need to allocate approximately as many page table entries as the application needs, which gives you a way to have sparse address spaces. Memory allocation is easy — why? Because the pages are all the same size, so it's really easy to put them on a free list; in fact, you don't even need a list, just a very large bitmap. And sharing is easy — I just showed you many ways to share. The cons: there's a pointer per page (typically 4K or 16K pages today), and with a 64-bit address space — I'll show you this in a moment — the page tables still add up, even multi-level ones. The page tables also need to be contiguous, meaning each of the sub-tables must be contiguous. But that's okay, because we're allocating 4K at a time, and in the 10-10-12 configuration the page tables have been set up to be exactly one page in size, so the same allocator can be used for both the page table entries and the pages themselves.

The other con, which we haven't addressed yet — I've slipped something underneath here without you realizing it — is that looking up multiple levels of translation takes time. This is not magic; it's hardware, and every level requires cycles to go to DRAM and look things up. How are we going to deal with that? It seems like I've taken loads and stores that were fast thanks to caching and turned them into something slow. What am I going to do — anybody have any ideas? Caching — exactly. A TLB is a type of cache. We're going to use caches. And for the person who asked about virtual caches: the TLB's index is going to be virtual, but the actual data caches are going to be physical, and I'll try to mention that when we get there.

Now, remember dual-mode operation — I just want to toss this out again. Can a process modify its own translation table? No, because if it could, all of this protection would be gone. Only the kernel should be able to modify (a) the tables themselves and (b) which tables are in use — setting CR3 can only be done in the kernel. To assist with protection, the hardware gives you dual mode: we talked about kernel mode versus user mode, and even though the x86 actually has four modes, we really only use two; there are bits in control registers that get set as you go from user mode to kernel mode and back. Just remember this. In x86 there are actually rings: ring 0 is kernel mode, ring 3 is user mode, and the ones in the middle are sometimes used with virtual machines. There's also some additional support for hypervisors, which we'll talk about in a later lecture, that people sometimes call ring minus one or something like that. All right — to summarize all of this, now
that we know more about virtual memory mapping: certain operations are restricted to kernel mode. Things like modifying the page table base register can only be done in kernel mode, and the page tables themselves can only be modified in kernel mode.

Now, there's a question here about whether we could use virtual caches and avoid some of the slow translation. The answer is we could, except that virtual caches have all sorts of consistency problems. The simple way to see it: since every process has its own notion of address zero, the moment you put in virtual caches, switching from one process to another means you have to flush the cache, because the first process's notion of zero is different from the second process's. So virtual caches aren't used very much these days — they carry that very complicated mess of having to be flushed when you switch from one process to another.

Now let's make this real for a second. Here's the x86 memory model with segmentation. We have a segment selector — typically you get it from the instruction; this one, for instance, is the GS segment. That selector gets looked up in a table, and the result is combined with an offset to give us a linear address, 32 bits. We then take that linear address and look it up: first the page directory, then the page table. So we actually have a segment lookup followed by two page lookups. And yes, this is expensive — computer architects hate expensive operations, because they slow everything down.

I just want to say a little bit more, very briefly, about what's in an x86 segment. There are six segment registers — SS, CS, DS, ES, FS, and GS. (I was making them green earlier; I probably should have made this one mid-green.) A segment register has 16 bits: 13 of them are a segment selector, then there's a global/local bit, then the current privilege mode you're in. So what's in the segment register — what's in the processor — is a pointer to what's in memory. And what's in memory is a big table — two tables, actually: the global descriptor table and the local descriptor table. Depending on the global/local bit, you look the selector up in one of them, and that gives you a segment descriptor: a set of bits that tell you where the segment starts in memory, how long it is, and what its various protection bits are. If you're wondering why the layout is so messy — you take the fields of the same color, put them together, and that gives you the actual base and limit of the segment — can anybody guess why? It's not that it's easier in hardware; it's not complicated in hardware, but that's not the reason. It's really messy to deal with in software, and nobody in their right mind would design something like this unless there was a reason. The reason is backward compatibility with the original x86 processors, even as they were expanded to 32 and 64 bits. Notice that the original six segments have this RPL field, which for the code segment basically tells you your current privilege level, zero or three; the difference between CPL and RPL has to do with the privilege level of the descriptor itself versus what the segment register says.

Now, how are segments used? Well, there's one set of global
segments for everybody and another set of local ones that are per-process. In legacy applications, 16-bit mode is used and the segments are real: they actually have a base and a length, and they do something helpful — and originally they weren't paged. Once we get to 32- and 64-bit mode, what happens is what I showed you earlier: the segments are used to figure out the linear address, which then just goes through the normal paging scheme. In modern operating systems — this question was on Piazza as well — notice that at least the first four segments are all set with base zero and length four gigabytes, which effectively makes them do nothing useful. The reason is that operating systems basically just don't bother with segments; they call this a flat address space. You have to keep the segments there because the hardware needs them, but they're set up in a way that doesn't do anything.

The one exception is that the GS and FS segments are typically used for thread-local storage, so every thread can have a little chunk of memory that's unique to its identity. You can do things like move gs:[0] into EAX — that actually gets the zeroth entry of the thread-local storage for that thread. This was originally supported by tools like GCC and has certainly been part of modern operating systems for a long time. The other interesting thing is that when you get to 64-bit mode, the hardware doesn't even really support segments anymore. They're still in the instructions, but the first four segments have a zero base and no length limit and are unchangeable, so flat mode has basically been baked into the 64-bit hardware. The only ones that still have some functionality are FS and GS, and that's
because of thread-local storage. So you could almost say that segments are essentially unused in modern x86 operating systems, except for thread-local storage. It would definitely be faster not to have the hardware support segments if they're not used, but several x86 modes have them, so the hardware needs to support them. If you were starting from scratch — say, building a RISC processor like RISC-V, which you're aware of — you might not put segments in at all.

Now, what about a four-level page table? Here's a typical x86-64. There are four 9-bit index fields; the physical page number is long enough that the entries have to be 8 bytes long, which is why we have 9 bits per level instead of 10. So to translate a virtual address to a physical address, you actually have to look up four things — this is starting to get pretty expensive, and when we get to virtual machines, which we'll talk about later, you potentially double all of this and it gets even more expensive. So for the x86-64 architecture: CR3, then four references to get to the actual page. Interestingly enough, you can have even larger pages. Let me back up for a second: we take CR3, and that gives us the first, second, third, and fourth levels — but if you look at the page table entries at those intermediate levels, there's a bit that, if set to one, says there is no fourth level, and you get a 2-megabyte page out of it, because the offset is then 21 bits rather than 12.
We can go even further: if we set PS to one in the second-level page table, we can get gigabyte pages. That's a mode supported by x86-64, and these larger page sizes make some sense since memory is so cheap these days. The trick is that if you allocate really large pages and don't use them, you're back to internal fragmentation waste. So the larger page sizes are typically used for things that are fixed, always present, and unlikely to be paged — the kernel is a good example — or maybe, if you're building a special operating system for something that streams really large items, you might use some of these bigger pages. But they're certainly available.

What happens to the higher bits of the address? That's a really good question, and I'll show you more in a later lecture, but the simple answer is that in all valid virtual addresses the higher bits are all the same — either all zeros or all ones. Everything in between, where they're not all the same, is a page fault. What that looks like in the virtual address space is a chunk at the top, a chunk at the bottom, and a really big hole in the middle — a permanent page fault that you can't map anything into. The reason it's done that way is that typically the things at the top are kernel and the things at the bottom are user, and as you expand your hardware you can add more and more bits.

Now, IA-64 — an Intel architecture designed for really huge machines — was going to map all 64 bits, and doing that as a forward-mapped tree would have meant something like a six-level page table: too many bits, way too much to look up. So they didn't want to do it this way. The question is: what else could we do if we're trying to build a table that's mostly sparse? Well, we could build a
hash table. All the schemes we've looked at so far are called forward page tables, because you take the virtual address, peel bits off, look up the first level, then peel off more bits for the second level, third level, fourth level. Instead, we can do an inverted page table, which looks like this: you take the virtual page number from the virtual address and look it up in a hash table, and that gives you the physical page. The advantage is that the hash table's size is of order the number of physical pages you have in DRAM, whereas with the forward scheme the size of the page table is related to the number of bits in the virtual address. Think that through: even though we've done a good job of allowing sparseness with a forward page table, its size is of order the size of the virtual address space, not the amount of DRAM you have, whereas the inverted table is of order the amount of DRAM you've got. That's why inverted page tables have shown up in a few architectures over the years, actually supported in hardware — the PowerPC, the UltraSPARC, and the IA-64 I just mentioned all had inverted page tables in hardware. There's more complexity to it, so the hardware is a little more complicated, and the page table itself has no locality, because it's a hash table — so while the forward page table entries we were showing can be cached in the data cache, that's much harder with an inverted page table. What makes it inverted is really that we're taking the virtual address and looking up the physical page: this hash table has one entry per physical page, whereas the previous schemes had an entry per virtual page. And that's why it's inverted.
What's stored here is an entry per physical page, whereas what was stored in the forward tables was essentially an entry per virtual page. Whether it's faster or slower depends a lot on the architecture — it's certainly potentially a lot faster than walking several levels of tables, but it's certainly not simpler, so it's a question of hardware simplicity. The total size of the page table here is roughly the number of pages the program actually uses in physical memory, rather than the number of pages in virtual memory.

So let's compare our options. Really simple segmentation — which is what was in the very first x86s, before the 80386 even had paging — gives you very fast context switching, because you're just changing the segment map, and there are no page tables to walk. But you get external fragmentation, so that was quickly abandoned in favor of some level of paging. The different schemes we've been talking about all have advantages and disadvantages: simple single-level paging has no external fragmentation, but the page table is huge and you can't have any sparseness in the virtual address space; then we talked about the several multi-level options. All of the schemes other than simple segmentation use the page as the basic unit of allocation, and thereby avoid the external fragmentation problem we had with segments.

So how do we do translation? The MMU has to translate virtual to physical on every instruction fetch, every load, every store. Those of you who remember 61C and caching will recall there was a lot of work done to make loads and stores fast with first- and second-level caches. The way I've described the lookup so far, I've made that really slow again: maybe my cache is fast, but before I can
figure out where to look in the cache, I have to go to DRAM and look up a bunch of stuff in a page table, and only then do I have an address I can look up quickly in the cache. (The one exception would be a virtual cache, but we're going to talk about physical caches, because those are pretty much what everybody has these days.) So what does the MMU do on a translation? With a one-level page table, it reads the page table entry, checks some valid bits, then goes to memory. With a two-level table, it reads a couple of page table entries out of DRAM, checks valid bits, and so on — more levels, more expensive. Clearly we can't go to the page table all the time, or we have a problem: we've just destroyed all the cool cache locality we've been working with. So what do we do about this?

Where and what is the MMU? Typically the MMU sits between the processor and the cache, and then there's a memory bus where the physical DRAM is. The processor requests reads of virtual addresses from the memory system, through the MMU, to the cache; sometime later we get data back, either from the cache or from physical memory, and we want the principles of locality to work well here. What is the MMU doing? Simply translating from virtual to physical — and as long as it does that translation correctly, we don't care how it makes itself fast. Nothing in my description of translation as a tree of tables requires us to walk the full tree all the time, as long as we can keep something fast that stays consistent with the actual tree of tables. So let's see if we can use caching to help.

If you remember caching — and this is a picture of me and my desk, which you can't see because of my Zoom background — a cache is a repository for copies that can be accessed more quickly than the original, and we try to make the frequent case fast and the infrequent case less dominant. Caching basically underlies everything in computers, and I like to joke that the operating system is all about caching everything — a little bit of it is about protection and dual mode, but the rest is caching. You can cache memory locations, address translations, pages, file blocks, file names, network routes — you name it, you can cache it. The rest of the term is going to be about using caching in clever ways to make things faster.

Caching is only good, though, if the frequent case is frequent and the infrequent case is not too expensive. When I put something on my desk so I can look at it frequently, it had better be the case that I am frequently looking at things on my desk; otherwise I'm just wasting desk space. It also ought to be the case that when I can't find something on the desk, it doesn't take too long to find it elsewhere; otherwise everything is slow no matter how good the caching is. An important measure — this is just reminding you — is the average memory access time, AMAT: the hit rate times the hit time plus the miss rate times the miss time, where hit rate and miss rate together add to one. This should be familiar from 61C. For instance, suppose the processor has to go to DRAM at 100 nanoseconds every time, or I can put in a one-nanosecond cache; maybe I can make the average much faster if I put the right stuff in the cache. With a 90 percent hit rate, the average memory access time is 0.9 times 1 — the time to get the data out
of the one nanosecond cache plus point one that's the ten percent where i miss it times 101. now why 101 well because in a situation like this i go all the way down to dram i pull it into the cache and then i do that last access out of the cache and so the final result here is on average 11.1 nanoseconds as opposed to 100 nanoseconds so i've gained a lot by having a 90 percent hit rate okay if the hit rate's 99 percent notice that my average comes down to about 2 nanoseconds 2.01 so the higher my hit rate the better i can do okay and the other thing is you can do the following you can say that miss time includes the hit time plus the miss penalty so when i miss it's both the time to hit which is the one nanosecond plus the time to actually do the miss so that's why i ended up with 101 nanoseconds there okay now another reason to deal with caching is basically this right look at all of these lookups in various memories looking up things checking permissions et cetera we just got to do this somehow quickly okay and the irony of this is if we're using caching to make loads and stores fast but to figure out what to load and store we have to go to dram then that's ironic okay in a very big way so we want to make the translation fast enough that we get back our advantage for caching and that's kind of where we need to go and so what we're going to do is we're going to use a translation lookaside buffer or tlb to cache our translations and thereby make this fast okay and why does caching work well you know about locality this is 61c there's temporal locality which is locality in time that says basically if i access something now i'm likely to access it soon again right spatial locality says that if i access something i'm likely to access something close to it in physical memory okay that's spatial locality and temporal locality is clearly because we have
loops and all sorts of stuff where we tend to access things over and over again spatial locality is because we build objects that are in structures and so when you access one thing in a structure you tend to access the other ones soon okay and so we can look at caches as an address stream coming from the processor that works its way through an upper level cache and then a second level cache and so on down to memory and we can start talking about what's the total performance we get by adding caching okay now if you remember the memory hierarchy this is a good example where we have registers that are extremely fast then we have the level one cache which is quite fast the level two cache which is bigger but slower the level three cache which is maybe shared on a multi-core system and additionally slower and then main memory is even slower ssd is slower disk is slower but you notice that as we get slower we also get a lot bigger so that relationship between speed and size is really physics okay because something that can store a huge amount of data is going to take longer to get at than something that can only store a limited amount of data all right and really we want our address translation here between the speed of registers and the l1 cache okay but main memory which is where our page table is stored is down here so there's clearly a problem right we're talking about things in sub nanoseconds versus things in multiple nanoseconds or hundreds of nanoseconds to get to dram and so we can't have every access go to memory or we've got a problem okay now about the time to access memory and the time to access dram if i made that distinction it's partially because kind of everything down here is sometimes maybe considered storage i don't want to confuse you much though so if i say memory and i don't make any distinctions i'll be talking about dram so i didn't mean to make that distinction for you sorry about that confusion so we want to just cache
the results of the recent translations and so what that means is let's make a table that goes from virtual page number to physical frame and we'll just keep a few of them around so that we can be very fast and so really this table which is a quick lookup table needs to be consistent with the page tables but it needs to be small enough that it's really fast so that we can work between the processor and the cache okay and that's the tlb it's really recording recent virtual page number to physical page number or physical page frame same thing translations if a lookup is present then you have the physical address without reading any of the page tables and you're quick okay this was actually invented by sir maurice wilkes who is one of the famous luminaries of computer architecture design he actually developed this thing before caches were developed and when you come up with a new concept you get to name it so if you're wondering why it's called a translation lookaside buffer you know it's because he decided to call it that you get to name it anything you want and people eventually realized that if it's good for page tables why not for the rest of data in memory and that's where caches came from the question in the chat here about is the tlb stored on the processor is an interesting one today absolutely this is part of the core there's the processor the mmu and the first level cache those are all tightly bound on the same little chunk of the chip even and i'll show you a picture we may not get to it today where i show you that originally the mmu was actually a separate chip back in the 80s and early 90s and so it's been getting closer and closer to the processor at the same time the caches have been getting closer and closer to the processor okay so when a tlb miss happens the page tables may be cached so you only go to memory so here's another look at this so the cpu gets a virtual address and hands it to the tlb the
tlb says is it cached if the answer is yes we immediately have a physical address and we go to physical memory now here for sarah who asked this question earlier this physical memory could be the cache-backed dram okay so i'm actually explicitly not saying dram here but whatever we want here that's fast this is our cache and dram okay and so if the tlb is fast enough then we can get a virtual address go through the tlb quickly and look up in our cache and now we've scored right because that's fast if on the other hand it's not in the tlb then we have to go to the mmu and actually walk the page table store the resulting tlb entry in the tlb and then we can go to the cache and obviously if we're in the kernel and we're doing untranslated stuff we can go around the tlb the question is really is this caching going to work is there locality in our page translations and the answer if you think about it certainly instructions have a huge amount of locality right because you do loops the code is executed together so you've got spatial locality so certainly for instructions this sounds clear the stack has a lot of spatial locality so this sounds good and even data accesses they don't have as much locality but they do have enough locality to make this tlb work pretty well okay and so you know just because of what i mentioned earlier objects tend to be together in physical space and so that's going to lead to locality in the tlb and i'm going to remind you guys i don't think we're going to get to it this time but i'll remind you next time about all the stuff you learned about caches caches can be multiple levels there can be a first level cache a second level cache and so you can do the same thing with tlbs there's nothing that says that the tlb which is a cache can't have first second third levels okay and modern processors have multiple levels of tlb caching all right so what kind of a cache is
the tlb well we can start talking about things like well it's got some number of sets and the line size is the storage you know how much is in the page table entries and so we can talk about what's the associativity of this cache et cetera all right and so this is where i'm going to remind you a little bit of some of the caching things that you remember so the first question might be how might the organization of a tlb differ from that of a conventional instruction or data cache okay and to do that we're going to start by remembering what causes cache misses okay and then next time we'll talk more about cache structures but there's the so-called three c's which are actually from berkeley mark hill who's been a professor at university of wisconsin for a long time came up with the three c's when he was a graduate student at berkeley and those are compulsory misses capacity misses and conflict misses the compulsory misses are the first time you access something and it's never been accessed before there's no way the cache could have it because it's never seen it before that's a compulsory miss or a cold miss okay pretty much with a compulsory miss you can't do anything about it the best you can do is pull it in from memory or if you can prefetch that would be one way you might be able to deal with compulsory misses capacity misses are examples where you pull something into the cache but the cache is just too small and therefore you know the next time you go looking for it it's not there conflict misses are cases where you actually have some associativity that's smaller than fully associative and that's an example where two entries overlap each other in the cache you pull the first one in you access the second one which kicks the first one out and then when you go looking for the first one again you now have a conflict miss so in the case of compulsory misses the best you can do there is to figure out how to have some sort of prefetching
in the case of capacity misses you gotta make your cache bigger in the case of conflict misses this is the case where either making the cache bigger or increasing associativity is going to be of importance and we'll explore that next time okay i like to say that there's three c's plus one the fourth c is a coherence miss which we will talk about a bit as well but that's an invalidation miss where you have multiple processors core a reads some data core b writes the data that invalidates the data that core a had and when core a goes to look at it again it's a miss and it's a coherence miss okay so i'm going to leave it at that since we're running out of time but in conclusion we've been talking a lot about page table structures which is really what does the mmu do and how does it structure the mapping between virtual and physical addresses in memory and we talked about this notion of memory divided into fixed size chunks as being very helpful and that those fixed size chunks are pages and the virtual page number from virtual addresses is mapped through the page table to a physical page number okay we talked about multi-level page tables where a virtual address is mapped through a series of tables and this is a way of dealing with sparseness and then we talked about the inverted page table as basically providing a hash table that was more closely related to the size of the physical memory rather than the size of the virtual address space okay now we've been talking about the principle of locality reminding you about temporal locality and spatial locality we talked briefly about the three major categories of cache misses compulsory conflict capacity and then coherence for that plus one as you can imagine in the case of the tlb if we miss in the tlb that can be very expensive because we have to do many dram accesses potentially on a miss in the tlb so we have to be very
careful to have as few misses as possible and that's going to lead us to higher associativity or even a fully associative cache okay and so when we talk next time about cache organizations like direct mapped set associative fully associative we're going to talk about highly associative ones okay and we've also talked about the tlb this time where a small number of page table entries are actually cached on the processor so they're extraordinarily fast it's the speed of registers and on a hit you basically have the full advantage of caching the regular loads and stores are able to translate quickly and then go to the actual data cache or instruction cache on a miss you've got to go and traverse the page table all right so i think we're good there i'm going to let you guys go i hope you have a great weekend and next monday we will pick up with a brief trip down memory lane through some caches and then we're going to start talking about page faults and interesting things that we can do with them so i hope you have a great day a great evening and a great weekend we'll see you next week
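The AMAT arithmetic walked through in this lecture can be sketched in a few lines of Python (the function name is mine; the numbers are the lecture's 1 ns cache / 100 ns DRAM example, plugged into the formula as stated, hit rate times hit time plus miss rate times miss time):

```python
def amat(hit_time_ns, miss_penalty_ns, hit_rate):
    # AMAT = hit_rate * hit_time + miss_rate * miss_time,
    # where miss_time = hit_time + miss_penalty: we check the cache,
    # miss, go down to DRAM, then do the final access out of the cache
    miss_rate = 1.0 - hit_rate
    miss_time_ns = hit_time_ns + miss_penalty_ns
    return hit_rate * hit_time_ns + miss_rate * miss_time_ns

# the lecture's example: a 1 ns cache in front of 100 ns DRAM
print(amat(1, 100, 0.90))  # roughly 11 ns on average, vs 100 ns uncached
print(amat(1, 100, 0.99))  # roughly 2 ns: a higher hit rate helps a lot
```

The point of the sketch is just that pushing the hit rate from 90% to 99% cuts the average access time by another factor of five or so, which is why TLB and cache hit rates matter so much.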
CS_162_Operating_Systems_and_Systems_Programming_Berkeley | CS162_Lecture_26_Optional_Key_Value_Stores_Cont_Chord_DataCapsules_Quantum_Computing.txt | well welcome back everybody to the last lecture of 162. this is kind of a special lecture i did get some requests for more information about distributed storage and quantum computing and so i think we're going to do that and i want to make sure that we talk through the chord algorithm since that's a relatively simple thing to understand and is very cool and applied pretty much everywhere so if you remember one of the things we talked about last week was basically this cap theorem which was really a conjecture that eric brewer put forth back in the early 2000s and basically said that you could get consistency availability or partition tolerance you couldn't get all three at once you might be able to get two of them at once and so that's the so-called theorem and we've talked through a number of reasons why that might be true but certainly you can imagine that if you have to be tolerant to cutting the network in half then it's going to be very hard to be both consistent and available all the time all right so oftentimes the cap theorem is a good way to understand global storage systems as a result now at the very end of last lecture we were talking about key value stores and the cool thing about key value stores is they're very simple in interface so basically you can have an arbitrary key although that's usually a hash over some value and you can have a value associated with it and if you do put a key comma value that goes somewhere into the ether and then when you do get of the key you get back the value that you started with and so this interface is extremely simple it's certainly an interface many of you have used in languages on a single machine what's interesting is if you use this in a global storage system it turns out that the interface is simple enough that
you can have some pretty interesting implementations okay and if you remember we started talking about key value stores with this notion of a distributed hash table where what i've got in yellow here is really the key value table that we might think about on one node except that in reality what happens is this gets distributed over a whole bunch of nodes and so there are many parts to this question one is how do we actually do that distributing another is when some client does a get how does it figure out which node to go to clearly we don't want to have a single routing table in the middle of the network that's going to be really expensive and then you know what happens if one of these storage nodes fails okay and so there are many failure modes you can imagine there's performance problems and scalability issues where we would like to increase the size of the system by just sort of adding more nodes down at the bottom here and so far we haven't really talked about how to even make that work okay and so today i want to tell you about the chord algorithm which has been turned into storage systems of many sorts including those used by amazon facebook et cetera okay so before we get there i wanted to remind you of this notion of recursive versus iterative lookups so here's an example of a recursive lookup which is like routing so with recursive routing basically if i say i want to get whatever key 14 has it goes to the master directory and then that directory forwards it on it routes it to the particular node that's got the results and then the node returns to the directory which returns back to the original client so that's recursively routing its way through an iterative approach is one in which the client basically talks to the directory then they talk to the individual nodes and we're not routing queries through anywhere every individual client is doing that particular lookup okay
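The two lookup styles just described can be written down as message traces (a toy sketch; the names client, directory, and node are placeholders, not a real API):

```python
def recursive_lookup(client, directory, node):
    # the query is routed: the directory forwards it to the storage
    # node and relays the answer back to the client
    return [(client, directory),   # "get(key)?"
            (directory, node),     # directory routes the query onward
            (node, directory),     # node returns the value
            (directory, client)]   # directory relays it back

def iterative_lookup(client, directory, node):
    # the client drives each step itself once it learns where the key lives
    return [(client, directory),   # "who stores this key?"
            (directory, client),   # "that node does"
            (client, node),        # "get(key)?"
            (node, client)]        # the value comes straight back
```

In the iterative trace the client issues two round trips itself, which spreads work across many clients; in the recursive trace the query stays on one routed path, which can be shorter end to end, matching the trade-off described above.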
and you can imagine that this second example here might be more scalable because we can have many clients all driving the lookups it gets a little tricky to maintain consistency though and so that's one of the reasons we might want to have recursive another is that this is just faster because we're basically just routing the shortest path to the results and back whereas this iterative one is potentially twice the latency okay so keep those two implementation techniques in mind they're really interchangeable and are more about what you do with the control plane portion of this than anything else so the challenge of that central directory is really that it's got many entries that are sort of key value pairs or at least key node mappings and you could have billions of entries in the system and so i would say that anything that treats a directory here like a single server is bound to be a bad idea okay because it's just not going to scale to billions of entries very well all right and back in the early 2000s myself and a bunch of other researchers started looking at how do you deal with peer-to-peer technologies as a way to solve this problem and one solution here is consistent hashing which i'm going to tell you about we did tell you about it last time but i want to reemphasize what it is and the idea behind consistent hashing is it's a way to take your keys and figure out a clean way to distribute them throughout the system without having to know pretty much all of the nodes that are participating so this seems like a strong ask when you think about it if you look back at this diagram for a moment if there's hundreds or thousands or millions of servers down here and we have to somehow with consistent hashing figure out which node to go to without going through a master directory and such that all these nodes don't know about each other that seems like
it's pretty difficult and that's one of the reasons the chord algorithm is so interesting all right and so this is basically going to be a mechanism to divide our space up and we'll talk you through that in the next slide and then i'm going to show you how the chord algorithm lets you get by with only knowing essentially a logarithmic number of nodes in the total system and you can still do this well so each one of those storage nodes is going to get a unique id okay and that unique id is going to be in the hash space so imagine you take their i don't know their ip address and their owner and whatever you concatenate all those things together and you hash them and you get a single 256-bit id out of that now we're going to talk more about secure hashes a little bit later in the lecture but so every node has an id and it's going to be in this ring space this one-dimensional space from 0 to 2 to the m minus 1 where m is going to be big okay and so let me just show you the picture here so here's an example of the ring the ring to rule them all and for the sake of class i'm only going to talk about m equals six okay so really there's only 64 possible spots on this ring two to the sixth gives you 64. in reality m is probably 256 okay because we're using sha 256 to do our hashing and so let's just say there's a lot more slots on there than 64. but let's use that for our illustration here and first of all notice on this ring are a set of servers that have their ids acquired by hashing their ip address and their name or whatever so this node here has an id of four and that means that we think of it as in position four on the ring this one has an id of eight we think about it as position eight this one has an id of 32 we think of it as position 32.
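The id assignment just described can be sketched like this (a toy, using the lecture's m = 6 so positions land in 0..63; a real system would keep all 256 bits of the SHA-256 digest rather than reducing it):

```python
import hashlib

M = 6  # the lecture's example: a 6-bit ring with 2**6 = 64 slots

def ring_position(identity):
    # hash the node's identity (e.g. its ip address concatenated with a
    # name) with SHA-256 and reduce it onto the ring [0, 2**M)
    digest = hashlib.sha256(identity.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** M)

# every node lands at a pseudo-random but deterministic spot on the ring
pos = ring_position("128.32.0.1:storage-node-a")
assert 0 <= pos < 2 ** M
```

Because the hash is deterministic, every participant computes the same position for a given node, and because it is a secure hash the positions are effectively random and hard to fake, which is the point made below.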
now hopefully what you can see here is these are not exactly evenly distributed in fact probabilistically they're evenly distributed but they're really a random hash over some data that's associated with the node and so they're distributed through the ring but they're not equally spaced okay and it's going to be important in fact that we have a good secure hash here that can basically pick these positions in a way that's hard for anybody to fake okay and then once we've put these on the ring now what we're gonna say is take for instance node eight we're gonna say that node eight stores all keys from five which is just after four to eight and fifteen is gonna store everything from nine to fifteen and twenty is gonna store everything from sixteen to 20. so the way to think about this is we put a bunch of storage nodes on this ring and then we're going to decide where to store our key value pairs based on where the key is on this ring okay now there's a lot of stuff i haven't told you yet like for instance what does this mean physically well i haven't told you physically because since these are randomly hashed these nodes are going to be spread physically all over the planet potentially the other thing i haven't told you about is how much do each of these nodes need to know about each other okay now just to emphasize here so key 14 value 14 is going to be stored on this server and why is that because server 15 is the closest one whose own id is bigger than or equal to the thing we're storing okay now i want to pause here and see if there are any questions and like i said in practice m is really something more like 256 and so this ring is really big and these nodes are much more sparsely distributed around the ring okay questions we only have a very small class today so you guys are likely to get your questions answered anybody okay no questions all right should we move on now
chord is a system that was developed with a group of researchers at mit and at berkeley and you can think of it as a distributed lookup service and in my view i like to teach about it because it's the simplest and cleanest algorithm for distributed storage that i have seen and it's a comparison point for all sorts of other algorithms okay and the important aspect of the design space for chord is we wanted to decouple correctness from efficiency so we want to figure out what do we need about that ring and the storage servers on that ring so that the algorithm i'm going to describe to you is correct and then we'll talk about how do we make it efficient okay and the thing that's interesting about chord is we're going to combine that central directory and the storage nodes together and spread them all amongst all the nodes and so we no longer have a single lookup directory and a set of storage servers instead we're going to have a set of storage servers that are just going to talk to each other to make this work and the properties are as follows so for correctness we'll make sure that each node knows about its neighbors on the ring so it needs to know how to go forward and how to go backwards a predecessor and a successor on the ring and as long as the ring is connected the ring is going to perform its tasks correctly and then from a performance standpoint we're going to start adding some more neighbors and so we're going to start learning about a logarithmic number of neighbors across the ring and that's going to help us get a much more efficient lookup okay now there are many other structured peer-to-peer lookup services like this tapestry is one that i worked on here at berkeley bamboo is another one i worked on there's pastry which was a microsoft project there's kademlia there's a lot of interesting ones several designs here at berkeley and so this problem of how to look up a key value pair got a lot of study in the early 2000s okay and
the way to think about chord's lookup mechanism is once again routing so we're going to describe this in a recursive fashion to start with and then of course you can do this in an iterative way as well i think the recursive version is a lot easier to think about so every node in the system is going to know who its successor node is and so here we have an example where some client talks to node 4 and says here look up key 37 for me okay and so what's going to happen well we're going to start routing packets from the point at which we enter until we find information about what the right node is that's going to store 37. and we can figure that out if you look ahead which we can't do if we're a distributed algorithm because we don't know about all these nodes but we're looking down from above and you can clearly see that any lookup for key 37 is going to want to get to node 44 because that's what's going to store key 37. why is that well 37 is going to get stored on the node that is the closest one clockwise on the ring okay and so for 37 the closest one clockwise is 44. so how does this happen well 4 says well i don't know what it is so it routes to 8. 8 says i don't know what it is so that routes to 15 then 20 32 35 and at this point 35 knows that its successor is 44 and so it just responds back and says hey i happen to know that node 44 is responsible for key 37. and at that point node 4 can talk back to the client and the client now knows just to talk to 44.
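That successor-hopping lookup can be sketched directly (a toy with the slide's node ids hard-coded in one table; in the real system each node knows only its own successor, which is why the walk takes O(n) hops):

```python
M = 6
ring = [4, 8, 15, 20, 32, 35, 44, 58]            # node ids from the slide
successor = {n: ring[(i + 1) % len(ring)] for i, n in enumerate(ring)}

def in_range(key, a, b):
    # true if key lies in the clockwise interval (a, b] on the ring,
    # handling the wrap past 2**M - 1 back around to 0
    key, a, b = key % 2**M, a % 2**M, b % 2**M
    return (a < key <= b) if a < b else (key > a or key <= b)

def lookup(start, key):
    # hop successor to successor until some node sees that its own
    # successor is responsible for the key; returns (owner, hops taken)
    node, hops = start, [start]
    while not in_range(key, node, successor[node]):
        node = successor[node]
        hops.append(node)
    return successor[node], hops

lookup(4, 37)   # walks 4 -> 8 -> 15 -> 20 -> 32 -> 35, answer: node 44
```

Running `lookup(4, 37)` reproduces the example above: the query hops through 8, 15, 20, 32 and 35, and 35 reports that its successor 44 owns the key.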
okay now if we wanted to be fully recursive we could have 35 pass a query on to 44 and have 44 send the key back that would be another option okay so if you notice here in order to make this correct so how did we find the first one again that's a great question so the answer is that any clients that talk to the storage server need to know at least one node in the system okay so that one node they need to know doesn't matter which one it is in this case the client which i haven't shown separately out here happened to know about node four and node four serves as a gateway into the ring all right did that answer that question now you can see that this doesn't seem very ideal because if i've got a thousand nodes that are storage nodes i may have to take many hops to find out what i want here it turns out that the worst case lookup here is order n so that's probably bad but we're going to show you how to get log n in a little bit it's going to be a dynamic performance optimization and so on that's going to be pretty interesting now what i want you to see though is from a correctness standpoint as long as every node knows who its predecessor and successor are and in this case just its successor then we can always find the server we're looking for okay now what does this really mean okay so here's this ring and here's you know key 14 stored on node 15 let's say what it really means is something more like this right so these nodes since we're doing hashes over their ip addresses and some metadata it means that they could be anywhere in the world and then we're connecting them together based on their hash name so four talks to eight eight talks to 15 and so on so that for instance key 14 happens to be stored here on the east coast node 4 is up in alaska so based on what i just showed you on the previous slide if somebody were to ask node 4 for key 14 we would go from alaska over to the east coast over to the west coast then we'd
get the result okay so really because of the hash being a randomizing function uh we've scrambled the geography of this ring okay now that's actually good okay and the reason it's good is because it means that no particular part in the in the world here might be a hot spot it means unfortunately though that we don't have the most uh local of look up because if we start at node four it'd be nice if we could just go down to 15 and back okay now this is a really good question here about redundancy how do we get redundancy out of this for the moment uh suspend that question for just a second certainly we could put raid servers or what you know raid storage on each of these nodes and that would be great if the disks fail but uh we would like something even more powerful because i don't know if there's a big earthquake and california falls off into the ocean it'd be nice to know that key 14 survived somehow so in addition to the raid redundancy that we've been talking about in class there's some other sort of redundancy that we want here okay yeah by the way if you've ever seen the original one of the original um superman movies basically the the plot is the bad guy buys up a bunch of soon to be beachfront property in nevada and then has a plan to basically cause uh california to fall into the ocean and therefore have really expensive properties fortunately superman uh saves the day and it doesn't happen so um okay so if we move um forward with this by the way i'm showing you these clients now to make this a little more clear the clients need to know one gateway into the system in order to talk to the system okay so that's going to be part of the initial lookup and by the way that's pretty similar to what happens with dns you need to know how to talk to local at least one dns server somewhere before you can start resolving names okay so the first thing i want to talk about is how can we make sure this this ring stays connected even though nodes are failing and coming 
back okay and so how we can make sure it's connected is we're going to have this dynamic stabilization procedure so every node can run stabilize okay in which it asks its current successor node who its predecessor is and figures out you know is there something wrong with who's connected to whom and then if it finds a problem it can run notify to help reconnect the ring okay these are the kind of things that are a lot easier to see with animations and so let's suppose that we have this ring and what i want to show you here for instance is here's a new node or a node that crashed coming back then suppose that what i want to do is i want to join the ring so what do i do well just like we've been talking with clients presumably what i know is i know one of the nodes in the system and if you remember this ring has nothing to do with locality that node could be i don't know 8 or 15 or something okay and so this new node needs to know one gateway node i'm going to say 15 just for the sake of argument and what are we going to do to join well we're going to send a join message to the node we happen to know about okay and what's interesting about this algorithm is all that the ring is going to do is figure out who is responsible for storing key 50 as if this was just a regular key value lookup okay and so we're going to work our way through and eventually 44 is going to say well i know about 58 here you go 58 is where key 50 would be okay and notice what we've done now all of a sudden the new node 50 knows that it needs 58 to be its successor and 44 to be its predecessor so just by asking the ring where key 50 belongs it now has some information about nodes that it can talk to okay and so 50 starts by updating its successor to 58 so now it's technically connected somewhat to node 58 but you know 44 is also connected to 58 so we now have a kind of a weird partially connected ring okay and let's
look through what happened so node 50 is going to run stabilize and so it's going to talk to the successor that it knows about and ask it who its predecessor is so when it does that what does it get back node 58 says oh my predecessor is 44. okay and at that point now things are getting interesting because at that point we can notify node 58 and say hey you know what i'm actually a node you should know about for your predecessor and we can also take this connection at some point 44 is going to be running its own stabilize stabilize is running continuously 44 is going to ask 58 who it thinks its predecessor is and it's going to say well i think it's 50. okay and at that point what you know is oh 44 says something's wrong here so it's going to change its successor to 50 and then finally it's going to notify 50 about itself at which point 50 knows its predecessor and when all is said and done node 50 has joined now what i want to point out about this joining operation i went through it pretty quickly but you're welcome to go back to the slides and animate it through is really what happens is we have this continuous stabilize procedure that everybody's running all the time they're asking their successor who the successor thinks the predecessor is and they just run this over and over again and what happens is the ring keeps converging into something connected and that will happen even if nodes fail and come back up and so on it'll converge to a connected scenario here okay but what you can think about pretty easily i think is if you lose two nodes in a row then what i've just described to you is no longer going to work so there is a way to completely break the ring such that the stabilize procedure won't reconnect it can anybody think about what the right thing to do is in that scenario how do we make sure that two failed nodes in a row can't prevent the ring from reattaching itself
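the stabilize and notify round just described can be sketched in a few lines of python. this is a minimal sketch with node ids only and no networking, using the hypothetical node ids 44, 50 and 58 from the slide:

```python
class Node:
    """minimal sketch of chord's stabilize/notify (ids only, no networking)"""
    def __init__(self, nid):
        self.id = nid
        self.successor = self
        self.predecessor = None

    @staticmethod
    def between(x, a, b):
        # true if id x lies strictly between a and b going clockwise
        return (a < x < b) if a < b else (x > a or x < b)

    def stabilize(self):
        # ask my current successor who its predecessor is
        p = self.successor.predecessor
        if p is not None and self.between(p.id, self.id, self.successor.id):
            self.successor = p          # somebody slipped in between us
        self.successor.notify(self)     # tell my successor about me

    def notify(self, other):
        # `other` claims it might be my predecessor
        if self.predecessor is None or self.between(
                other.id, self.predecessor.id, self.id):
            self.predecessor = other

# two-node ring 44 <-> 58, then node 50 joins using 58 as its successor
n44, n58 = Node(44), Node(58)
n44.successor, n58.successor = n58, n44
n44.predecessor, n58.predecessor = n58, n44
n50 = Node(50)
n50.successor = n58                     # learned by asking the ring for key 50

n50.stabilize()                         # 58 learns 50 is its predecessor
n44.stabilize()                         # 44 learns 50 is its new successor
print(n44.successor.id, n50.successor.id, n50.predecessor.id)  # 50 58 44
```

running the two stabilize calls is one round of the continuous procedure; in a real deployment every node repeats this on a timer, which is what makes the ring converge after failures.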
anybody thoughts yeah perfect we need to know more than just one successor and one predecessor okay and so what that's called typically is the leaf set okay so the leaf set is multiple nodes if you look at any given node multiple nodes forward and backward called the leaf set as long as we maintain that leaf set then we can reconnect in a way that's going to be stable against all sorts of uh failures okay and one thing i posted last time i could move it to today i guess if you want for reading but there one of the original chord papers talks you through about how many of these leaf set nodes you need to make the probability of a permanently disconnected ring so small that you wouldn't care about it okay all right good now questions are we good now one of the things that i will point out is so far we still have this pretty um expensive lookup process which is order n now we have figured out how to make this stable so first of all as long as we have a fully connected ring we can always find the storage uh for this for the data and therefore we have a correct algorithm now um the question that's uh in the chat there is is a good one which is suppose that we had some key stored on node 50 and node 50 disconnects then all sudden key you know a key stored on that node or the set of keys stored on there are suddenly unavailable i'm assuming that's what you're thinking about and uh that's correct so we'll have to we'll have to fix that problem right now we're just interested in the lookup process of figuring out which node should hold our data we'll worry about making sure the data doesn't go away in a moment okay so oh okay um not exactly let's see oh i see if you have two if you have um somebody you mean to like disconnect every other one it turns out that that will uh converge pretty well especially if you have multiple links but try going through the the process and disconnect one and another one uh and skip one in the middle you'll see that uh pretty much what's going 
to happen is, let me think about that, yeah you can eventually get this to stabilize and reconnect okay now the really strong part about keeping things connected is to have a leaf set with more than one node by the way okay now what you should do is take a look at that paper because they describe this in more detail but basically what you want is a stabilization procedure that can work even when several nodes in a row are failing and what you'll see is that there's a way to do that as long as you have multiple links and what we're going to do right now for performance is going to make it even harder to destroy the connectivity okay so if you look here the question is sort of how do we make sure that we have better than order n okay and better than order n is the following what we're going to do is rather than just keeping track of nodes forward and backward we're going to keep track of our current position plus 1 our current position plus 2 plus 4 plus 8 plus 16 and so on and what i mean by keep track of it is here at node 80 the question would be what node would store 81 well that would be 96 what node would store 82 well that would also be 96 what node would store 84 that'd be 96 at some point we get to what node would store you know 80 plus 32 so 112 well that would be 112.
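the finger table just described, and the logarithmic routing it enables, can be sketched as follows. this is a simplification, assuming global knowledge of the node set so we can recompute fingers on demand (a real chord node maintains its own table by querying the ring); the node ids are the hypothetical ones from the earlier ring slide, on a 2^6 id space:

```python
def build_fingers(node_ids, n, m=6):
    """finger[i] = the node responsible for id (n + 2**i) mod 2**m"""
    ring = 2 ** m
    def succ(kid):
        # first node at or clockwise-after kid (wrapping around)
        return next((kid + s) % ring for s in range(ring)
                    if (kid + s) % ring in node_ids)
    return [succ((n + 2 ** i) % ring) for i in range(m)]

def lookup(node_ids, start, kid, m=6):
    """greedy bit-correcting routing; returns (owner node, hop count)"""
    ring = 2 ** m
    dist = lambda a, b: (b - a) % ring      # clockwise distance
    cur, hops = start, 0
    while dist(cur, kid) != 0:
        fingers = build_fingers(node_ids, cur, m)
        # longest finger hop that does not overshoot the key
        preceding = [f for f in fingers if 0 < dist(cur, f) <= dist(cur, kid)]
        if not preceding:
            return fingers[0], hops + 1     # my direct successor owns the key
        cur = max(preceding, key=lambda f: dist(cur, f))
        hops += 1
    return cur, hops

nodes = {4, 15, 20, 32, 35, 44, 58}
print(lookup(nodes, 58, 14))   # (15, 2): key 14 is owned by node 15
```

each hop roughly halves the remaining clockwise distance, which is the bit-correction view: a logarithmic number of hops instead of walking the ring in order n.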
okay and we're going to keep track of a logarithmic number of these pointers and of course the way we find out about them is we just query the ring and ask it oh i want to store each of these keys and what will come back from the ring is which node is responsible the powerful thing about this is once i've got all these nodes now i can do a really fast routing process to figure out which node is going to store the key i'm interested in okay and one thing that's very helpful here i think in this context for everybody is to think about this as bit correction so i am at a certain position and i'm interested in a certain key and what i'm going to do is correct the bits one at a time using this finger table so my first routing hop is going to say well if i'm at 80 and i want to get somewhere over on the ring i'm going to correct the bit i've got, in this case it would be a 1 in the high bit, i'm going to turn it to 0 by taking a long hop and then i'll take a less long hop and so on and i end up with a logarithmic number of hops to get me to my destination and you can view that like i'm correcting the bits from my starting point to my ending point one at a time by taking these various hops okay and that's how we end up with logarithmic routing time and furthermore this forest of additional pointers plus the extra pointers from the leaf set together make it really hard to be unable to reconnect the ring okay and so if you read that chord paper it talks about how you make use of all the information you've got to keep the ring connected okay questions okay we're good now let's think a little bit more about data okay so basically first of all we're going to have more than one forward and backward link called the leaf set and in the predecessor reply message node a can send its k minus one successors and so on and so you can see what's going to happen is during this
heartbeat process of looking things up you know asking well hey my successor who's your predecessor during that process, the stabilize process, we're going to get back multiple nodes which is going to help us get a forest of connectivity forward and backward and that's going to allow us to keep our leaf set as correct as possible and i will point out by the way that these links are really just an approximation of what we need so if it turns out i try to take a long hop and the node it happens to connect to is down then that's okay i'll just take the next one okay and so i can always revert to taking the order n routing path until i've got some of these long hops available to me okay and so it converges very nicely on a performant version of things but it can always fall back on the circular routing process if some of these fingers aren't correct and we just keep refreshing them over and over again and so there is a finger table lookup process that just keeps renewing these pointers over and over again and the good thing about that is as new nodes come in the finger table adjusts and as old nodes leave the finger table adjusts so that we keep ourselves with our log n lookup all right now and you end up, with really high probability, able to find data even if half of the nodes fail, so if you have on the order of log m leaf nodes, where m is the number of nodes in the system, you can find data even if half of your nodes fail, and that's kind of what's proved in that chord paper and that's not that many because it's a logarithmic number okay so before i go on to storage fault tolerance for the data does anybody have any questions on this we good okay so now let's look back at what we had a slide before right so we had key 14 stored on node 15.
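the consistent-hashing placement rule behind "key 14 lives on node 15" is small enough to sketch directly. a minimal sketch, assuming the hypothetical node ids from the slide on a 2^6 id space:

```python
def owner(node_ids, kid, m=6):
    """consistent hashing placement: a key id is owned by the first node
    at or clockwise-after it on the 2**m id ring (wrapping around)"""
    ring = 2 ** m
    return next((kid + s) % ring for s in range(ring)
                if (kid + s) % ring in node_ids)

nodes = {4, 15, 20, 32, 35, 44, 58}   # hypothetical node ids from the slide
print(owner(nodes, 14))               # 15: key 14 lives on node 15
print(owner(nodes - {15}, 14))        # 20: if node 15 leaves, ownership
                                      # shifts to the next node clockwise
```

notice that removing one node only moves the keys that node owned, to its immediate successor; every other key stays put, which is the whole point of consistent hashing.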
and the downside of this is of course that, the way i've described this, the only place for key 14 to be stored is on node 15. now if you look at the consistent hashing what it says is if node 15 weren't there key 14 would be stored on 20 right that's just the next node up from 14 but since the only copy of key 14 is currently stored on node 15 if 15 dies or goes away we don't have the data and so it's fine that the consistent hashing tells us where it should be stored but we can't store it there because we've lost our data so we got to do something else here okay and the way we're going to do that is we're going to take the forward leaf set, or you can do both forward and backward, it depends on the algorithm, and what we're going to do is store 14 on the successive nodes that we know about because of the leaf set so we'll store it on 20 and 32 and now what's good about this is if node 15 fails, which is the node that's supposed to store it, we've already got a copy on node 20 and node 20 can notice oh 15 went down therefore node 20 can start the process of making sure that 35 gets a replica and we always have three copies in our leaf set okay so if we think of the leaf set as not only for keeping the ring connected but also for how we replicate then we can now come up with a dynamic process that automatically adapts as nodes fail by replicating on successive nodes and making sure that we always have a given number of copies in the system in addition to the ring being connected okay so if node 15 fails now what we'll do is we'll add an extra copy to node 35.
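the leaf-set replication rule just described can be sketched as "the owner plus the next k-1 successors". a minimal sketch, again with the hypothetical node ids from the slide:

```python
def replica_nodes(node_ids, kid, k=3, m=6):
    """store a key on its owner plus the next k-1 successor nodes
    (the forward leaf set), so k copies exist in total"""
    ring = 2 ** m
    out, cur = [], kid
    while len(out) < k:
        node = next((cur + s) % ring for s in range(ring)
                    if (cur + s) % ring in node_ids)
        out.append(node)
        cur = (node + 1) % ring       # continue clockwise past this replica
    return out

nodes = {4, 15, 20, 32, 35, 44, 58}
print(replica_nodes(nodes, 14))          # [15, 20, 32]
print(replica_nodes(nodes - {15}, 14))   # [20, 32, 35]: after 15 fails,
                                         # node 35 picks up the third copy
```

the second call shows the repair behavior from the slide: when node 15 fails, the surviving replicas re-run the same placement rule and node 35 ends up holding the new third copy.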
okay questions and the ring is going to stay connected because of our connectivity algorithm and so what's good about this is like i said you store the data in the chord ring and it's very hard to destroy okay why are they called leaf sets that's a good question the reason they're called leaf sets is because in some sense if you take any given starting node like 58 and you view the set of fingers, that's a tree, and so eventually you get to the leaf set and so it's like a tree with leaves, that's where the leaf part is coming from and here's an example of that so if you look at what happens with leaf sets, what i'm going to show you here for instance is a starting node, its leaf set is in green, the finger tables are in red, and suppose that i'm trying to get from here over to here okay and so what i'm going to do is start bit correcting so i might take a long hop first and now i can check all of the fingers, or i guess you could call them branches if we don't want to mix metaphors too much, and the leaves, and none of those quite have what i want so i'll follow one of these branches, it's a little shorter, and then eventually i'll get to a node where that node knows, because of the leaf set, which node is the one i'm looking for which is going to be the one that's just bigger clockwise from the id i'm looking at okay and so this leaf set not only serves to help us with our replication but it also serves as part of the last couple of hops, we can use the leaf set to basically find who's supposed to have our data all right now let's look at replication from a physical standpoint right so if you look again at this ring i showed you a little while ago that ring is mapped physically to things that are spread widely and now we can see another big advantage of the randomness introduced in chord and that advantage is that these copies are
actually stored in geographically separate places and so when the big one happens in california it's not likely to take out you know things over here in minnesota okay and so the randomness is helping us to avoid correlated failures where yeah we have a bunch of copies but they're all in the same machine room and the building got struck by lightning, so that doesn't happen in a chord algorithm, it's sort of geographically distributed by nature okay the downside of course is performance might hurt if you happen to be too far away from a copy and so i will tell you that there are subsequent versions of chord which, when you're doing this routing and you have a lot of options here, see how we have many places we could go, can actually take hops that advance us the furthest along the ring while keeping each hop as local as possible and so we can actually take locality into account to some extent in chord and make our routing less like bouncing back and forth across the planet randomly and more like working our way physically toward the thing that we're interested in all right good last but not least, and i didn't have slides for this but i want to point this out, one of the things we can do with chord is we can use chord to store locations of data rather than the data itself and so think of this like a dns built out of chord and so what the client does is, the client doesn't know where the data they're interested in is, they ask the chord ring, the chord ring tells them who to talk to, and then they can talk directly to them and exchange data over the shortest path possible using tcp/ip or whatever and so you can now get the best of both worlds in that you have a very hard to destroy lookup process and then, here's the client and it's using some data, you can choose to replicate that data close to the client and maybe a couple other places close to the client, even though the initial lookup might be geographically separate once
you start using the data and know where it is you can have good locality out of it and that's pretty much what we did with the tapestry lookup process back for oceanstore okay so i did want to point out that what i've just described to you, this chord ring, is actually used in lots of cloud services these days, the idea at least, so for instance dynamodb, and i have a paper for that up on the reading from last time, uses chord rings, and you can look down here, but rather than spreading them around the planet it uses them within their machine rooms as a way to distribute load and so when you're ordering things from amazon and you're putting things in your cart all of that data is actually stored in something like chord that's in a machine room okay and the applications, because they're worried about not annoying people when they want them to buy things, what the chord ring is really about is making sure that they can get their performance for retrieving something to a certain number of nines okay and so availability is an important aspect here and so basically you have a service guarantee that says we'll get a response within 300 milliseconds for say 99.9 percent of the requests okay and so that's part of the way that the chord algorithms are adapted in a real cloud service okay all right and notice that this is very much in contrast to what we've been talking about a lot of the rest of the term, which is focusing on mean response time, instead we want to have guaranteed performance okay and this is again, i want you to think back to when we were talking about real time scheduling and what was important there was keeping the predictability of the scheduling high and keeping the timing tight rather than worrying about making it as fast as possible so s3 is actually using something slightly
different but there's lots of different schemes out there what's good about the various things that are using chord like uh chord like algorithms is this is scalable as you can imagine if you don't have enough performance you can just start adding more nodes and it adapts automatically which is pretty good okay so what i wanted to do next uh i'm going to leave that there a little bit i want to talk a little bit about security and then um talk through a couple of things and then i want to uh try to get to quantum computing as well so we can i know there was some of you asked some questions about that so i'm going to leave this topic unless there's more questions okay so i'm going to talk through a couple of things that i'm pretty sure i'm assuming everybody kind of knows but i want to make sure we all have the same terminology so um you know security is an interesting thing it's basically computing in the presence of an adversary so i'm assuming several of you have all taken 161. um i don't know if that's true or not we have a very small class tonight but um you can start worrying about things like um can that adversary uh prevent me from making forward progress or can failure prevent me from reliability robustness fault tolerance etc security is kind of dealing with actions of a knowledgeable attacker who's really trying to cause harm and we want to make sure that uh they can't really screw us up okay and we talked about byzantine agreement uh a couple of weeks ago that's one example of trying to prevent a decision-making process from working but in general security is kind of dealing with situations where there's an adversary uh there's a security problem okay and there's been many problems okay where people have broken in to systems and um you know it's a it's a constant arms race uh preventing people from breaking into things you care about by using new techniques and the distinction between protection and security i think is an important one because protection 
is the set of mechanisms that we talk about in this class, security is basically using those mechanisms to prevent misuse of resources so for instance virtual memory is a mechanism that can be used for protection, a security policy would be making sure that when we use virtual memory we don't let malicious processes or different processes owned by different people use the same memory and have a potential for screwing each other up, so that would be a security policy built with our protection mechanisms okay so i wanted to point out something interesting i don't know if you've ever seen this before but here is a car in the ditch and what's interesting about this particular car in the ditch is that back in july of 2015 a team of researchers took complete control of a jeep suv remotely, exploiting a firmware attack over the sprint cellular network, and they basically caused the car to speed up and slow down and veer off the road, totally wirelessly, so this is a little scary to think about now fortunately no humans were harmed and the people whose car was driven off the road were researchers as well but you know this is something that one might hope our security policies could prevent and the thing that is getting in the way of preventing things like this is that there's an increasing amount of machine to machine communication where it's really machines talking to other machines, controlling each other, and making sure that a malicious person can't get in the middle of that and cause unexpected behavior is very tricky and there's this term cyber-physical systems, i don't know how many of you have heard of that, but the idea is computers controlling physical things using policies and algorithms and the problem is that if somebody manages to get in and mess with those cyber-physical systems they can cause physical harm okay and that's a problem so part of this question is the following so let's talk about the data so there was
firmware in this car so one might argue that one of the problems was that firmware was accepted as authentic even though it came from a malicious third party and so one of the questions that's important is do you know where your data came from, that's a provenance question, another is do you know whether it's been changed or altered in any way, that's an integrity question, and really this is a question of the rise of fake data, which is kind of much worse than fake news because it's about corrupting the data and making the system behave very badly so you know we have several security requirements that people talk about so authentication is making sure that a user who's making changes to the system is really who they claim to be, data integrity is making sure that the data hasn't changed, okay so that's important, confidentiality is making sure that the data is read only by authorized users, so that often involves encryption of some sort, and then non-repudiation is a surprisingly important thing that people don't often talk about, which is that if one sender makes a statement and they send a message or whatever, they can't later claim that well i didn't really send it, somebody malicious did, and so that's basically making sure that you can't repudiate things that you've previously said and so i'm hoping that if you haven't taken 161 it's on your list because there's a very interesting set of things that people can talk about but cryptography is one of the central points of many of these mechanisms, you just have to use it correctly, and this is communication in the presence of adversaries, it's been studied for thousands of years, there's actually something called the code book which you should look up which talks about you know thousands of years of cryptography and the central goal has always been confidentiality, encoding information so an adversary can't extract it, the general premise is that there's a key and if you have the key
you can decode things and if you don't have the key it's impossible what's gotten more interesting over the years of course is public key cryptography where there's really two associated keys and you encode with one and decode with the other and that really leads to all sorts of really interesting authentication possibilities okay so basic cryptography which you've probably heard about is you have a secret key and you take the plain text and you encrypt it with the secret key and you send over the internet something called ciphertext which is encrypted and you can decrypt it on the other side and, assuming the key hasn't been leaked and you have a good algorithm like aes, it's not possible for an adversary to send a message that the receiver will treat as real because you have to have the secret key now one thing you do need to do in order to make this symmetric key encryption work, symmetric because the same secret is used at both sides, is to prevent an adversary from holding on to an old message and sending it later, so you have to start adding what are called nonces, which are things like timestamps and so on, so that every time you send this it's unique and if somebody sends an old version you can detect it, but i'm assuming everybody understands the idea of encryption with a symmetric key and the other thing is i mentioned hashes earlier and so the idea of a secure hash function is one where you take data and you run it through a hash function and you get a bunch of bits out of it and if you change the data even slightly you end up, with a good hash function, with something where essentially roughly half of the bits change so you know the change from fox to the red fox runs across the ice will give you something very different and if you take fox and you add a few things after it it'll also change drastically okay and so the hash we often talk about, the hash of a message, is a set of bits say
256 bits and this is a good example of what we used on the chord ring, where that ring was two to the m possibilities, well the position might be the result of a hash function like sha-256 okay and what makes this secure is that it's not possible for somebody to come up with another source that matches a given hash value okay that's one example of something that's not possible, in fact with a good secure hash function it's not even easy to come up with two different items yourself that have the same hash value okay and so that's why we can kind of use hashes as a proxy for the data itself and a lot of the things you hear about in the security literature using cryptography assume that the hash function is a reasonable proxy for the data itself okay so sha-256 is a good example so here for instance if we share a key, and that key is secret, we can take a plain text, something like a contract, and run it through a hash function where we take that key and append the message m, and that's called a digest, an hmac, now we can send that across along with the data and at the other side we can verify by recomputing that hmac okay and if they match, the one that was sent across versus the one that you computed yourself, then you can know that the message is not corrupted, otherwise it's corrupted, and so we can use hashes to prove later, after the transmission has happened, that the data is authentic okay so hashing is pretty powerful and i'm not going to have a lot of time to go through this with you, that's a 161 topic, but just keep that in your lexicon, hashing is a good way to ensure the integrity of data at the other side and so for instance in that firmware problem with the car we could have a key that only came from the manufacturer in a secret way and we could check the integrity of that firmware against the manufacturer and if the hash
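the avalanche property and the hmac check just described can be seen with python's standard library; the key and the contract text here are made up purely for illustration:

```python
import hashlib
import hmac

# avalanche: different inputs give totally unrelated 256-bit digests
a = hashlib.sha256(b"fox").hexdigest()
b = hashlib.sha256(b"the red fox runs across the ice").hexdigest()
print(a != b)                                   # True

# hmac: a hash over a shared secret key together with the message m
key = b"shared-secret"                          # assumed pre-shared out of band
msg = b"contract: alice pays bob 10 units"
digest = hmac.new(key, msg, hashlib.sha256).hexdigest()

# the receiver recomputes the hmac over what arrived and compares
ok = hmac.compare_digest(
    digest, hmac.new(key, msg, hashlib.sha256).hexdigest())
forged = hmac.compare_digest(
    digest, hmac.new(key, b"contract: alice pays eve 10 units",
                     hashlib.sha256).hexdigest())
print(ok, forged)                               # True False
```

an adversary without the key cannot produce a matching digest for a tampered message, which is exactly the firmware-integrity check being described.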
basically didn't match then we could know that that firmware is probably bogus and we shouldn't be using it okay now the downside of course of everything we've talked about is both sides share the same key and so if you leak the key then you've got problems okay and furthermore you have to somehow share the key, and so that requires you to go in a dark alley and you know hand the key over, and so this seems like only part of the solution and the interesting thing here is this idea of public private key pairs and public key encryption, and the cool thing about that is that now you can distribute the public key, let's suppose that somebody over here wants to have anybody send them a secure message, they generate a public private key pair, they can broadcast the public key to the world, and then anybody who wants to send a message just encrypts it with the public key and the only way to decrypt is the private key, and that private key is something that i hold secret but the public key i broadcast, and so this is the basis for all sorts of modern algorithms okay among other things if i were to take the hash over some data and then sign it by encrypting it with my private key, then anybody with my public key can check that the data made it through intact and could only have come from me, okay, so that's part of how we actually sign things all right so for instance here's alice and bob, let me show you a fun algorithm here, bob sends his public key out into the wild to alice, now alice can encrypt messages and send them to bob and only bob can decrypt them, alice can send her public key and now bob can send things to her, and what we end up with is a secure channel between the two of them given public information all right and now this is another mechanism we can build all sorts of stuff on top of okay now the question about how to know whether this is a valid public key requires public key
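the public/private key idea can be sketched with textbook rsa. this is a toy with tiny fixed primes, completely insecure and for illustration only (the modular-inverse form of `pow` needs python 3.8+):

```python
# toy rsa with tiny primes -- never use key sizes like this for real
p, q = 61, 53
n = p * q                      # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, chosen coprime with phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e

m = 42                         # a message encoded as a number < n
c = pow(m, e, n)               # anyone can encrypt with the public key (e, n)
print(pow(c, d, n))            # 42: only the private key (d, n) recovers m
```

running the exponents the other way, raising a hash to the private exponent so anyone can check it with the public one, is the signing direction mentioned above.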
infrastructure but that's another story so now i'm i'm going i went through this very quickly how many people have never seen anybody uh never seen this kind of thing before or is this all pretty familiar okay good oh great this is in cs70 great so let's talk about a project that i've been working on so again you could view security as trying to protect things with a firewall or you could view security as it's all about the data and if you can protect the data then you can protect everything okay and so if you think about the internet of things really uh one way to look at the internet of things is that we have a whole bunch of devices and compute elements all over the world and it's really a graph of services that we want to connect and so distributed storage is everywhere every arrow represents communication we've got storage everywhere and really what we want to do is we want to make sure that the data can only be written by authorized parties and only read by authorized parties okay and these secure enclaves um are a topic for another day as well but this is a special um virtual machine that's in modern hardware that basically allows you to set up a secure channel and do some secure encryption in a way that um not even a um not even the local operating system can see the data okay and so if we have these secure enclaves stored everywhere and secure encrypted data then perhaps we can do some interesting things okay and we can do them securely and so um let me see i'm running low on time here i wanted to say something a little bit interesting here about why data breaches which we've heard a lot about in the last four years are so prevalent okay and if you look the problem is that people who are trying to build secure systems kind of think of it this way they say well i've got a secure network on the left i've got a secure cloud in the middle i got a secure network on the right and they're so secure that the only thing i have to worry about is securing the 
communication between these uh parties and if i do that then the system's secure okay and so this is what i like to call as border security rather than data security and so if you think well i'm going to put some firewalls and now i can say look this is a trusted computing base that's secure this is one as well there's one around the cloud and then you know the only thing left is cell phones which i make secure tunnels with and this just is fine um the problem with this point of view which you've probably heard about everywhere is the moment that we have any breaches inside the trusted computing base then all of a sudden not only is the data breached but somebody who is inside that firewall can produce data that looks authentic uh even though it's not because people are trusting well if it came through the firewall properly then it must be authentic and if you think back to cyber security you know um if you think back to what we've been talking about with that car suddenly that might be that you could have firmware that looks like it's from the manufacturer even though it's from an adversary and now we suddenly have this issue that physical devices that are trusting on this security suddenly start performing things they're not supposed to okay so the real reason we get these data breaches everywhere is because people think that they can put these boundaries up in a way that don't um can't be breached and of course we know that's not true and basically uh the problem really is not only are things breached but the integrity and the provenance of that data is not known so what do we do the data centric vision which is one that um i've adopted in uh my research group is one in which we think about shipping containers full of data so if you think about uh down the port of oakland you've probably all seen these shipping containers um this was a a great invention back in 1956 so before 1956 what happened was we had uh longshoremen who would take a bunch of things and they 
would go to a ship and they'd play tetris with it to try to fit all of these things onto the ship and then the ship would go to its destination and then there'd be people there that would unpack them and then you'd have to figure out how to put them on trucks and so on and it was a mess and basically one person said well why don't we just make things that are all the same size and shape and then all of a sudden we've got ships trains cranes all of the infrastructure for handling these things the same across the planet and now i can ship something from my house in lafayette to the outskirts of beijing just by calling the right trucks to come pick up a shipping container which gets taken to the port of oakland put on a ship and then it goes across the ocean and it's unloaded and so on and that's the standardization of the container so the idea in our group is to say can we use this idea to help in some way and the idea is very simply that we think of shipping containers full of data which we call data capsules and the reason i've got this little green boundary around here is because it's a data capsule and inside the data capsule is a bunch of transactions that are hashed so remember those hashes we talked about and signed where we use a private key to sign a hash over something and as a result we trust that this really came from the person who said it did because only they could have the private key and as a result of these data capsules this gives us a cryptographically secure way of moving data around to the edge to the cloud and back again in a way that nobody can fake out okay another way to look at this is this is almost like a blockchain in a box okay and so what we're doing in our group is we're looking at how to take these data capsules make them a standard in a way that everybody uses them and on top of them you can build file systems and databases and everything you're used to but underneath the
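The hashed-and-signed transaction idea just described can be sketched in a few lines. This is a toy, not the real data capsule format: it uses an HMAC with a shared secret as a stand-in for the public-key signatures an actual capsule would carry, and the field names are invented for illustration.

```python
import hashlib
import hmac
import json

# Toy data-capsule-style append-only log: each record hashes over the
# previous one and is "signed" (here via HMAC as a stand-in for a real
# private-key signature that only the owner could produce).

OWNER_KEY = b"owner-private-key"  # hypothetical key material

def append_record(log, payload):
    """Append a transaction linked to the previous record and signed."""
    prev = log[-1]["sig"] if log else "genesis"
    body = json.dumps({"payload": payload, "prev": prev}, sort_keys=True)
    sig = hmac.new(OWNER_KEY, body.encode(), hashlib.sha256).hexdigest()
    log.append({"body": body, "sig": sig})

def verify(log):
    """Re-derive every signature and hash link; tampering fails the check."""
    prev = "genesis"
    for rec in log:
        expected = hmac.new(OWNER_KEY, rec["body"].encode(), hashlib.sha256).hexdigest()
        if rec["sig"] != expected or json.loads(rec["body"])["prev"] != prev:
            return False
        prev = rec["sig"]
    return True

log = []
append_record(log, "write block 0")
append_record(log, "write block 1")
print(verify(log))                                   # True
log[0]["body"] = log[0]["body"].replace("block 0", "evil")
print(verify(log))                                   # False: garbage is detected
```

This is the "throw the garbage out" property: a reader who trusts the owner's key can discard any record an adversary slipped in, because it can't carry a valid signature.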
network knows how to ship these things around and route queries to them and so think of this again the underlying network is like the ships and trains and cranes and planes that handle this standardized metadata and what is the standardized metadata well it's a hash over an owner key and some other metadata about who created this and that forms a unique address that you can route to in our system not unlike an ip address but it's a unique hash over the data and you can imagine these things being small so they could be on phones or really large so they could be you know terabyte size databases but that standardized metadata is really what allows them to be shared securely across the planet pretty much okay so why does this idea help the network effect okay pun intended here so standardization makes it possible for the infrastructure to be put everywhere and it benefits everyone federation you can actually build a market of service providers the data becomes a first-class entity so your data basically can float pretty much anywhere so you could put a data capsule server in your house and all of a sudden your local data capsules could be stored there or if you're doing some communication with somebody else you could get a copy of their data capsules and again because it's like a blockchain in a box it's not possible for somebody to fake data that doesn't belong in there okay and so think back to that firmware issue with the car in the ditch okay and the other thing is that metadata we're looking at actually has details about what the network should and should not enforce and so you can even start talking about privacy where if you had a bunch of cameras in a local edge domain and they were taking a bunch of data and putting them in data capsules you could make sure that the network would refuse to route those say outside of the house or outside of your building if that was disallowed all right so the vision here is really the
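The "hash over an owner key and some other metadata forms a unique address" point can be made concrete with a short sketch. The field names and key bytes here are illustrative assumptions, not the real GDP metadata layout:

```python
import hashlib
import json

# Sketch of standardized capsule naming: the capsule's routable name is a
# hash over the owner's public key plus creation metadata, giving a flat,
# location-independent address (not unlike an IP address, but derived
# from the data's provenance rather than from where it lives).

def capsule_name(owner_pubkey: bytes, metadata: dict) -> str:
    canonical = owner_pubkey + json.dumps(metadata, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

name = capsule_name(b"alice-public-key",
                    {"created": "2020-12-01", "label": "camera-feed"})
print(len(name))   # 64 hex characters; same inputs always yield the same address
```

Because the name is deterministic in the owner key and metadata, any node can independently recompute it, and no adversary can claim that address for different data.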
following it's you have a bunch of resources underneath these are spread in the cloud and through endpoint places all over the network you've got data capsules that have the ability to float anywhere they're allowed and that's what we call the global data plane okay so the global data plane is something that spans the globe just like if you remember the chord ring spanned the globe okay again it's a peer-to-peer system just like ip as well that does routing and multicast it has things we call trust domains and accounting below you can have many utility providers kind of like comcast or at&t that all provide service and above there's an api that allows all of these apps to access their data in the data capsules okay and so this is like me in my house calling a truck to pick up a shipping container okay and so this vision if it's ultimately complete is one where instead of paying for ip service you'd pay for data capsule service and you'd be able to store your data in a way that was secure okay and could be used anywhere you want and you'd own your data okay so physical view just one last little slide here and then i'll move on to the quantum computing so i want to make sure we cover that but if you think about ip the way we talked about it briefly a couple of weeks prior if you look at the physical view of ip there's a bunch of routing clouds and there's also transit providers okay and so this is exactly how we get ip working and so in the global data plane we do the same thing where we have global data plane domains we have routers that route global data plane traffic and they're tied together just like we get with tcp/ip okay there are peering arrangements just like with ip and we have name resolvers that help us find our data capsules and as a result we could actually have forgive me for building all this up but we can actually have clients which might be compute they might be little robots they might
be smart cars or teslas that can all tie into the global data plane and access data that they're authorized to anywhere it happens to be and if they need high performance or privacy they can pull it to their local domain and so the physical view of the gdp is really instead of thinking about packets you're now thinking about data and its integrity and its provenance okay so it's a switch in viewpoint on how we want to be dealing with data all right sorry if that's a lot of information but i wanted to see if there's any questions before i switch over to some quantum computing all righty give me a second i'll be right back and then we'll see if there are any questions that came up one moment okay so good we have some good questions here so first question is how do we know the data is secured so just like with a blockchain let me just back up to the picture here which i think is a good one to be talking about what we know is the following the metadata is among other things the public key of an owner hashed okay and so all of these signatures have to be signed by the owner and anybody can verify that the data that's in here was put in there by the right owner okay so that gives us integrity and provenance it means that we can know that none of the data that's in here could have been put there by an adversary so that's the first thing that we know and the second thing is of course we can put arbitrary encryption on top of this as well to make it private so really the signatures are about integrity and who put the data there and the encryption would be about privacy and there are many ways of deciding kind of which keys to use for that encryption and how to share them with the people you want to be able to decrypt okay and so that's the security of this and what you know is when you get some data from the network you can immediately verify that that data is what it's supposed to be so you can check that
the signatures are correct and if you have a signature only at the end of a chain of data you can essentially check the rest of the chain by checking the hash pointers so these are all of the things you get out of a blockchain by the way for those of you that are familiar with bitcoin or whatever you get it here with the data capsules and so i like to call these cryptographically hardened bundles of data and if somebody tries to put garbage in there a legitimate person who's trying to look at this can just throw the garbage out because there's no way that that garbage could have been put in there in a way that meets the integrity constraints of the data okay and so it's not forgeable it maintains its integrity the transactions can't be swapped or whatever and so it's a uniquely high integrity kind of bundle of data and if you build file systems and what have you on top of this really what you're doing is you're appending data to that and it becomes a secure log on which you can build pretty much databases you can build file systems all sorts of stuff okay so this linked structure within the data capsule is really just think of this like git you guys are all familiar with git now so this is like a git tree with signatures and integrity through hashes okay and the signatures can't be faked again because only the proper owner knows the private key to produce the signatures okay and so if you breach the private key of course that's a problem but potentially every data capsule could have a unique private key which leads to an interesting key management issue but that could be another topic okay did i answer those questions so the vision here really is of pretty much everybody using data capsules everywhere okay and if you can get that to happen then you know basically you potentially have a very interesting scenario here now i just wanted to share another point here really quickly so for instance the way we're
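The "signature only at the end of the chain" trick can be sketched directly. This is a minimal illustration of hash pointers, not the actual capsule encoding; the head hash stands in for the value that would actually be signed:

```python
import hashlib

# Sketch of "sign only the head, verify the rest via hash pointers":
# each record's hash covers the previous record's hash, so one trusted
# head hash transitively protects the whole chain, much like git history.

def record_hash(payload: bytes, prev_hash: str) -> str:
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

def build_chain(payloads):
    chain, h = [], "genesis"
    for p in payloads:
        h = record_hash(p, h)
        chain.append((p, h))
    return chain

def verify_chain(chain, trusted_head):
    # recompute every hash pointer and compare the final one to the head
    h = "genesis"
    for payload, stored in chain:
        h = record_hash(payload, h)
        if h != stored:
            return False
    return h == trusted_head

chain = build_chain([b"r0", b"r1", b"r2"])
head = chain[-1][1]           # in a real capsule, this is what gets signed
print(verify_chain(chain, head))   # True until any record is altered
```

Altering any payload anywhere in the chain changes every hash after it, so it can never match a trusted head.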
looking at our routing plane is really the data plane itself is a series of routers that all know kind of where the data capsules are and have some very interesting properties and if any of you want to come work on this project come talk to me separately we have plenty of places we can talk to you okay now i promised you some quantum computing but i did want to show you one other interesting slide here which shows you kind of an idea of how we can build things up so the data capsule infrastructure spans the globe kind of just like tcp/ip does and so in that scenario what you get is you get potentially these gdp switches which are just an overlay network on top of ip you can have location services and storage services for storing data capsules you can have secure enclaves which let you do secure computation and then you can have lots of interesting clients and part of what we're doing is we're working with roboticists and machine learning folks to put their data and their models for grasping and so on inside of data capsules and as a result they can reside securely in the edge in say your robots or whatever in a way that can't be breached okay and so this is really targeted at secure edge infrastructure in addition to the cloud so these data capsules can move back and forth but certainly you need something like this on the edge because these pieces of hardware are easily breached and you want to make sure your data is secured and unforgeable all right good so let me say a little bit about using quantum mechanics to compute and since there's only a few of you tonight if you're willing to hear me out i can talk for a little bit longer just to get through a couple of other things on quantum computing but you know what does it mean to use quantum mechanics to compute it's basically using weird but kind of useful properties of quantum mechanics two of them
quantization and superposition i don't know how many of you have taken a quantum mechanics class but what you find out is for instance back in chemistry if you remember you had the orbitals right and so electrons were only allowed in certain rings or spheres actually around atoms and that was because of quantum mechanics that's quantization that says that the electron could either be you know at one point the s equals zero or the s equals one or s equals two but nowhere in between and that quantization really gives us the ability to talk about something like a one or a zero so we've got the idea of digital data buried in that quantization but because it's quantum mechanics we can do the second thing which is superposition and this is having a bit which is both a zero and a one in a certain fraction between the two and that's where things get interesting okay it's like it's fifty percent zero fifty percent one or something in between that's called superposition and what's interesting to me so i've you know designed computers at various times in my life is that with most digital abstractions that you might learn about in 151 or pick your 141 some of those various vlsi classes you spend a lot of time trying to get rid of the quantum effects you want a zero to be a zero and you want a one to be a one and you want them to stay that way and it's when they don't stay that way that you've got problems so then you put in error correction codes and all that stuff that we talked about last week and the week before with quantum mechanics however if you're willing to allow things to not be always a one or always a zero what you can do is you can just start doing quantum computing and that's basically using quantization and superposition to compute okay and so some interesting results just to tell you quickly here for instance shor's factoring algorithm factors large numbers in polynomial time even though the best known classical
ones are sub-exponential in the number of bits and so you know if you could get shor's algorithm running on a quantum computer pretty much all rsa cryptography would be broken because you could factor okay so other interesting results here are for instance grover's algorithm which is not as spectacular but it's still pretty interesting so imagine you've got an unsorted database of millions of elements okay so what does unsorted mean it's not sorted right so if you wanted to find a value you know on average how many elements would you have to look through in a million items before you found the one you want well on average you'd have to look at half of them however grover's algorithm using a quantum computer lets you find items in an unsorted database in a time that's the square root of n rather than half of n okay that's pretty interesting right um 191 is mentioned in the chat that's a good class to take if you're interested in quantum computing another one that's my favorite i think the best application of quantum computing is what i like to call material simulation this was kind of the original application of quantum computing that was thought of and basically the idea there is if i want to design a brand new element or brand new material to build things out of and i want to take into account all the quantum mechanical effects then classically i'd have to do something that was exponentially hard but it'd be linear time in a quantum computer and so if i'm really interested in designing exotic new materials to build interesting things i probably want a quantum computer so there are many other algorithms out there now these days they've been slowly working on them but these are some pretty good ones that give you an idea why this might be interesting okay and furthermore we've got google we've got ibm it's very popular these days with big companies microsoft is in here as well looking
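The Grover comparison above is easy to put in numbers. A back-of-envelope check, ignoring constant factors and the cost of building the quantum oracle:

```python
import math

# Unsorted search over n items: classical expected lookups are ~n/2,
# while Grover's algorithm needs on the order of sqrt(n) quantum queries.

n = 1_000_000
classical = n // 2        # expected classical probes
grover = math.isqrt(n)    # ~sqrt(n) quantum queries

print(classical, grover)  # 500000 1000
```

So for a million items the quantum approach touches the search oracle about five hundred times fewer, which is a quadratic rather than exponential speedup.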
at building these quantum machines both google and ibm are doing superconducting bits so these parts of the machine you see here are normally put into a dewar and they're running you know at four degrees kelvin or something really cold this particular type of quantum computing technology is not going to be in your laptop at least not in any laptop i'd want to put on my lap but there are other types of technologies including ion traps that potentially are pretty interesting where there have been some thoughts over the years that they might be able to run at something closer to room temperature not there yet the current goal of google and ibm and there's been some notion that maybe they've shown this is to do something which they call quantum supremacy which is basically to prove that there really is a possibility that quantum computers could be faster than classical ones and so the issue here is that these computers being built by google and ibm have you know maybe on the order of 100 bits maximum it's very hard to do anything interesting with 100 bits but they're focusing on demonstrations that show that with those 100 bits they could potentially do something a lot better than a classical machine so that's called quantum supremacy so what i wanted to do just to give you a little flavor for quantum computing that you can go away with here is here's a version of quantization that's particularly simple to get once you've got it and that is there are certain particles so protons and electrons and neutrons those are good examples that are what are called spin one-half particles and physicists treat these things like they're spinning like a top okay except what's interesting about that is that they can only spin with the axis pointing up or down nowhere in between okay that's the quantization thing and what i'm showing is the spin relative to a magnetic pole north and south and what's interesting about that spin up or spin down
is that now i've got zero and one okay so suddenly i've got binary that's interesting right and so these particles like protons or electrons have this intrinsic spin and so now i've got one and zero or up and down okay and a representation called the heisenberg representation looks at this messy physical situation like this which is either a zero or a one in these brackets and that represents spin up and spin down okay or vice versa depending on how you set it up if you're aligned with the field then that's a lower energy so that's probably the zero now one proposal for building quantum computers from way back when was called the kane proposal and those spins were actually what you got when you embedded phosphorus impurity atoms into silicon and then those phosphorus impurities would have a spin up or spin down that could be treated as one and zero and then you could actually use these electrons to manipulate okay and that was one of several sort of what i like to think of as scalable solutions built on top of silicon which are you know exciting because maybe you could get moore's law out of them and this is an example of something people were looking at okay but the temperature here was less than one kelvin which is really cool okay but let's suppose now here's where the quantum computing gets pretty tricky okay and bear with me just a little bit i know i'm going a tiny bit over here but if you think of the zero and the one thing okay this is actually a wave function if you take quantum mechanics representing spin up and spin down and what's interesting is the wave function in quantum mechanics is actually a complex function that i can add together c zero times zero plus c one times one with complex coefficients and all i need to make sure is that c zero squared plus c one squared is equal to one so what happens is these actually end up being probabilities that if i actually tried to look i would see a zero
or if i actually tried to look i'd see a one and so what you see here okay with this psi function is actually a superposition of zeroness and oneness together okay now you know i realize this looks a little weird we don't normally get wave function notation in 162. but the thing that's very interesting about this is that this is a description of a combination of zeroness and oneness where the probabilities can be adjusted any way you want such that their norms squared add up to one okay and if you measure the bits if you actually said well do i have a zero or do i have a one what's funny is you find out you don't have this thing because when you look at it you either find up or down with a given probability okay all right now bear with me so those of you that are skeptics out there would say oh really i don't know whether it's a zero or one so these c zeros and c ones really represent my lack of knowledge but once i finally looked i found out what it was okay i'm sure that there are several of you that think that that's the case but that's one option the other is that this is a real effect where the proton or whatever we're looking at here is actually sort of in one state and sort of in another okay and those are two options and it turned out that there was a set of famous bell inequality experiments that were done that showed that reality is actually the second choice so in fact as weird as it is that proton is a combination of zero and one at the beginning and it's only when we look carefully and force it to be one or the other when we actually try to measure it that it gets forced into a state okay and so if you think about this in terms of building a quantum computer there's a couple of interesting things here so one we've got to make sure that the environment never measures before we're ready because otherwise we'll destroy this interesting superposition and maybe we need that to
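The single-qubit picture being described can be sketched numerically: two complex amplitudes whose norms squared must sum to one, and a "look" that collapses to a definite bit with those probabilities. This is only classical arithmetic mimicking the bookkeeping, of course, not a quantum simulation:

```python
import math
import random

# One-qubit superposition sketch: amplitudes c0 and c1 with
# |c0|^2 + |c1|^2 = 1, and a measurement that collapses to 0 or 1.

c0 = complex(1 / math.sqrt(2), 0)  # amplitude on the zero state
c1 = complex(0, 1 / math.sqrt(2))  # amplitude on the one state; the phase
                                   # doesn't change the probabilities

p0 = abs(c0) ** 2                  # probability of seeing a zero
p1 = abs(c1) ** 2                  # probability of seeing a one
assert abs(p0 + p1 - 1) < 1e-12    # the normalization constraint

def measure():
    # looking at the qubit forces it into a definite 0 or 1
    return 0 if random.random() < p0 else 1

print(round(p0, 3), round(p1, 3))  # 0.5 0.5
```

Any pair of complex amplitudes works as long as the norms squared add to one; the measurement outcome is random but its statistics are fixed by the amplitudes.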
compute okay and just to get a better notion of how weird this really is okay so if i have a bunch of bits together there's two to the n possible values of n bits you know that right but here's a three bit example of a superposition in which there are eight options and i can simultaneously have many of those eight values in different proportions as long as the probability sums up to one okay so as long as c zero zero zero norm squared plus c zero zero one norm squared plus this plus this plus this all sum up to one that's a real physical situation which represents a single register in my computer that has a superposition of all of those values and the moment i take a look then i force it to be one value okay and so if you only measure one bit for instance you can say that the first bit will be a zero with what probability well which ones have a zero in the first bit this one this one this one this one so the probability of finding a zero in the first bit is this sum the probability of finding a one in the first bit is this sum and you can go from there okay so we really don't want the environment to measure this before it's ready so in fact we can have quantum error correction codes believe it or not which can protect this quantum information from being measured by accident by the environment and as a result we can hold these quantum states for a long enough period of time to actually do something interesting with them so let me show you this simple two-bit state okay this is called an epr pair for einstein podolsky rosen it was proposed by einstein and podolsky and rosen as a thought experiment and the idea is i've got two bits but i don't have all four options i only have a zero zero or a one one okay those are my two options and i separate the two bits so these are the two spins and in fact what i do is i maybe send one on a rocket ship to go light years away and so now these two bits represented by this
state are light years away and if we measure one of them like let's suppose we measure and find that there's a zero back on earth we know instantaneously that we've got a zero on the other side okay and so that looks like we had faster than light travel in fact instantaneous travel of information from the earth out to that far planet einstein really didn't like this he called it spooky action at a distance okay but in fact what's interesting about this is you can prove that there's no actual information transferred okay however we can use this to do what's called teleportation which is take information at one side do some measurements send some data to the remote side and recreate the quantum state at the other side that's called teleportation okay all right so i'm about to lose a bunch of you but let me just show you how you factor with this because i think this is interesting so the way we build a computer is we take a complex state like i just showed you and then we put it through a bunch of adders whatever you want to call it which are really all unitary transformations they're things that make sure that that probability always adds up to one and then we measure the result and the output is our answer okay so basic computing paradigm you input a register with superposed values you do a bunch of computing on it such that the probabilities are kept and you measure okay and the way it looks is that you take let's say an input with all possible combinations of the inputs with equal probabilities it looks like you're doing computation on all possible values at once but then when you measure you pick up exactly one and that's the answer you get okay and if you don't do anything very interesting here this is going to look like you randomly picked some input and computed on it so basically what we're talking about here looks like a random computation like you get in cs70 or 170 where
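The EPR correlation just described can be mimicked classically in a couple of lines. To be clear about the assumption: a classical simulation like this reproduces the perfect correlation of the (|00> + |11>) state but cannot reproduce the bell-inequality violations mentioned earlier, which is exactly what distinguishes real entanglement from a shared random bit:

```python
import random

# Toy model of measuring the EPR pair (|00> + |11>)/sqrt(2): the joint
# state only allows outcomes 00 or 11, so the two far-apart measurements
# always agree even though each individual outcome is a 50/50 coin flip.

def measure_epr_pair():
    shared = random.choice([0, 1])  # 50/50 collapse of the joint state
    return shared, shared           # (earth-side bit, far-side bit)

earth, far_planet = measure_epr_pair()
print(earth == far_planet)          # True, every single time
```

This also illustrates why no information travels faster than light here: each side just sees a fair coin flip; the correlation is only visible once the two results are brought back together.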
you randomly pick an input you compute on it you look at the result so that's not very interesting right because we already know how to do that what we would like is for it to be such that if you take this input state that's a complex combination of possible inputs you run it through a quantum computer and you measure the output is with high probability some answer that was hard to find that's what we would like okay and so if you look here you know if the two to the n inputs are equally probable there could be two to the n outputs that are equally probable and what we'd like is the probability of the outputs to be piled up high on the answer we want and it turns out that something like a fourier transform does the trick okay so if we can do a fourier transform on some input we can actually get an interesting output so if you bear with me i'm going to show you shor's factoring in one slide so this is something that would break rsa okay we can basically say the difficulty of factoring rsa this is the type of cryptography that you might use with your bank across the internet is figuring out how to take a large number which is publicly known and factor it into two large factors that are primes and if you can do that you break the cryptography so classically this is an exponential time algorithm and so as long as these are big enough nobody's going to break it a quantum computer can do it in polynomial time and let me show you how here it is in a nutshell you pick a random x between 0 and n that's easy you say if the gcd the greatest common divisor between x and n is not 1 you just found a factor you win let's assume you didn't do that we find the smallest integer r such that x raised to the r is congruent to one mod n okay so that basically requires modular multiplication and finding that r is really hard to do classically now if r is odd we've got to repeat if r is even then we can say well because x to the r is equivalent to one mod
n i know that x to the r over 2 mod n is equal to some a and so then a minus 1 times a plus 1 is a multiple of n and we have another failure mode here where a is equal to n minus 1 but if it isn't then the gcd of a plus or minus 1 and n is a non-trivial factor so if we could somehow figure out what r makes this equation satisfied and we could do that quickly then we win and that's something that you can't do easily classically but with a quantum computer what we can do and unfortunately i guess i don't have time to do this because we're running out of time but i can set up a situation where my input to my algorithm is all the possible k's if i take a bunch of values and i compute x to that value and i add them all together as a superposition and i do a fourier transform what i'll find is that x to the k as k goes through all its possible values is going to be a periodic function why because if x to the r is equivalent to one then x to the two r is equivalent to one x to the three r is equivalent to one and so i can essentially do a fourier transform in a quantum computer i can get these peaks and that fourier transform will tell me what the frequencies are and that will give me the value that i need i have to repeat this a polynomial number of times and then voila i've just factored that number okay so that's the essence of shor's factoring algorithm and it all hinges on the fact that i can come up with this superposition state with all possible values of x to the k where k varies from 0 to n okay and i put them all together in a superposition state i do a fourier transform i get the result now the interesting question is is this something to worry about the answer is well so far no but it's looking like it's getting closer and closer okay and so i would say there's been a lot of very interesting effort in quantum computing in the last five years
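The classical skeleton of the recipe on the slide can be written out directly. Everything below is classical, including the order-finding loop, which is the one step the quantum fourier transform actually speeds up; brute-forcing r like this only works for tiny n:

```python
import math
import random

# Classical skeleton of Shor's factoring: pick x, handle the lucky gcd
# case, find the order r of x mod n (the quantum step, brute-forced
# here), then use gcd(x^(r/2) +/- 1, n) to extract a factor.

def find_order(x, n):
    """Smallest r with x^r congruent to 1 mod n (x must be coprime to n)."""
    r, value = 1, x % n
    while value != 1:
        value = (value * x) % n
        r += 1
    return r

def shor_classical(n):
    while True:
        x = random.randrange(2, n)
        g = math.gcd(x, n)
        if g != 1:
            return g          # lucky: x already shares a factor with n
        r = find_order(x, n)
        if r % 2 == 1:
            continue          # odd order: repeat with another x
        a = pow(x, r // 2, n)
        if a == n - 1:
            continue          # the other failure mode from the slide
        for candidate in (math.gcd(a - 1, n), math.gcd(a + 1, n)):
            if 1 < candidate < n:
                return candidate

f = shor_classical(15)
print(f, 15 // f)             # a nontrivial factorization of 15
```

Running it on 15 yields 3 and 5 (in some order); the whole point of the quantum machine is replacing `find_order` with something polynomial in the number of bits of n.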
enough that i did a lot of research on quantum computers in the early 2000s and up to 2010 2011. i think it's looking more promising now one of the things that we did do and we don't have time to talk about this but we actually investigated if you were to build that factoring algorithm and you could do it as quantum circuits that could run on a quantum computer what would that look like and we actually investigated ways of optimizing that and we could actually look at the performance of different options for shor's factoring algorithm as quantum circuits and so we built a cad tool to do that so i don't know i think it's a pretty interesting area right now and there's a lot of interest in it all right so sorry i kept you guys way over but this is the last lecture i figured if anybody was interested we talked about key value stores we talked about chord hopefully i gave you an idea about chord because chord is the root from which pretty much all the interesting peer-to-peer algorithms come and it's used in a lot of areas right now we briefly went through some cryptography and then i talked about how data capsules are all about the data and that's a new model where the data can flow to the edge and the cloud and back and i think it's a pretty exciting project we've got working if anybody's interested in that and then we told you a little bit about quantum computing and feel free to come ask me or also look at 191 which is an interesting class on quantum computing all right well thank you everybody sorry for going way over today thank you for those of you that stuck around and i hope you have a good finalizing of project three and those of you listening in cyberspace later as well you are all great and so i'm gonna miss you guys and i hope you have a wonderful holiday have a good rest of your semester and i hope you don't have too many finals in a week bye now
CS_162_Operating_Systems_and_Systems_Programming_Berkeley | CS162_Lecture_2_Four_Fundamental_OS_Concepts.txt

welcome back everybody to the second lecture of 162 i'm getting ahead of myself here so what i would like to do today is dive right into the material and let's try to keep comments during the actual lecture in the chat as actual questions and let's see what we can do so as you remember from last time we were talking about what an operating system is and you know i hope i kind of mentioned by the end of the lecture that it's really hard to say exactly what it is because not everybody agrees so we could kind of ask what it does and that's what we did here we talked about how an operating system acts as a referee illusionist and glue where the referee is managing protections on resources we'll talk a lot about that as the term goes on the illusionist is the notion that we're going to somehow make it look like we have a really clean set of resources that are much better than the actual ones are and virtual machine technologies help in this in some sense and then the glue is kind of a set of common services that basically make writing programs on top of an operating system much easier and things that you're very familiar with are things like windowing systems and the file system so if you also recall we started talking about for instance what the os might do for protection and what you see here is an example of two processes we're going to say more about them as we go on today that each think they have a machine that has all of the system resources to themselves and has a set of virtual resources like threads address spaces files and sockets that they can utilize any way they want and we can have more than one of these running at the same time on top of the same physical hardware and that's the job of the operating system and one of the things that we did talk about which we're going to bring up again today is
this notion that program 2 which is in green here is not allowed to talk or otherwise observe modify process one's state not allowed to modify the operating system in ways that aren't allowed not allowed to modify storage unless they're allowed so this is part of the uh referee aspect of operating systems and we kind of said if process two tries to do any of these things then typically uh the operating system will object and you might get a segmentation fault uh with core dump or something like that where the process is essentially killed off so um the other thing that we started to talk about is that the the world of hardware is very complex and it's for very many good reasons and that world of complexity needs to be managed in a way so that we can still write programs that function properly okay and the um a good question that's in the chat right now was back on this previous slide of what a second instance of the same program get its uh get a separate process and the answer is yes so if you recall a process was an instantiation of a program and you can have multiple instantiations of the same program so um among the examples of complexity i just wanted to show you here we kind of talked about the sky lake uh processor series uh last time briefly and you can see that there's um the core uh chip itself is um directly connected to really high bandwidth dram channels and uh high speed graphics and so on and then there is this uh interface the direct media interface to what's often um used to be called the south bridge but it's basically the the uh chip that connects to all the rest of the i o and off of that we can have things like high-speed io devices for pci express we can have disks we can have slow i o through usb we can have ethernet hd audio pcie drives raid smart connect whatever and then there's an even slow interface off of that that gives us bios and all sorts of interesting things so really the reason that operating systems are so crucial is they provide 
a way to take this complexity and manage it okay now a slide i didn't really get a lot of time to talk about last time was this one which you can go to uh this link that i've got down the bottom here information is beautiful.net slash visualizations million lines of code and it kind of talks about millions of lines of code for different things that you're familiar with and if you look for instance one of the things that's a constant of the universe is for instance that going to a later version of something typically increases the size of that thing and the complexity of that thing tremendously so for instance from linux 2.2 to linux 3.1 if you notice that's a that's an increase of maybe a factor of six or more and the other thing to look at is things like cars which we take for granted are getting very complicated in terms of millions of lines of code so this is almost 100 million lines of code in a modern car that uh boy when you're traveling at highway speeds you want to make sure there aren't any bugs in that okay so we have a lot of complexity to uh deal with and of course uh yeah i have no idea how many lines of code in the 747 that's a really good question i would say a lot especially especially the modern systems which are essentially flown by computer pretty much if you think about ways in which the complexity leaks into the operating system uh what you see for instance is that third-party device drivers which are uh written by companies other than the os provider are often places where bugs happen and are the reason for a high fraction of crashes okay and um you know you buy a device it gives you a third-party device driver which you then install that device driver wasn't necessarily uh written all that carefully although microsoft over the years has come up with vetting processes to try to make things better so as apple so as the linux folks but you could think of a device driver as a reason to provide a nice clean interface and ironically that device 
driver interface is one of the things that causes things to crash holes in security models we'll talk a little bit later in the term about for instance spectre and meltdown these are the two symbols for it but uh all of a sudden in late 2017 early 2018 everything people thought they knew about securing data in a kernel turned out to be wrong because of the way that people were designing processors to do speculative execution and basically you could extract data directly from kernel space in many instances which is an issue the version skew on libraries can cause problems and that's one of the reasons that docker is so helpful and we'll talk more about that as we go on and then of course there's the inevitable data breaches attacks timing channels et cetera okay and you know one of the questions in the chat here is sort of why are device drivers particularly vulnerable and it's really that they are the part of the system that touches the most complicated hardware and uh basically they're trying to put a clean interface up to the software but what's inside of them is potentially very complicated okay and there's a lot of interesting things that you can get just by googling for instance and i encourage you all to do that we'll talk more about device drivers when we get later in the term so the operating system is really trying to help abstract the underlying hardware and tame complexity all right and so you could think of there's hardware underneath the operating system is in between to provide a clean abstraction which we'll even call a virtual machine abstraction to the applications okay now the question about how do we quantify reliability and so on there have been a number of attempts at that it's been hard but if you actually look at people that have measured the root causes of a lot of crashes um something upwards of 50 or 60 percent of them at one point in time were actually attributable to bugs in device drivers which is uh pretty spectacular so the way we deal with this abstraction mechanism here or this virtualization mechanism is again we're providing various resources that are better than the hardware versions so instead of processors we're going to provide threads talk about that today for the first time instead of memory which is a bunch of dram pieces we're going to provide address spaces instead of disks or ssds which have blocks we're going to provide a file system instead of networks we're going to have a nice clean socket interface instead of machines we're going to have processes all right and this is going to be an ongoing discussion throughout the next several weeks where we talk about how the operating system virtualizes the hardware pieces to give you a much cleaner environment the bios which was asked about in the chat is typically a way of providing a set of standardized services on top of hardware and part of the bios is a legacy of the old days in ibm pcs but some of it also provides firmware that can get updated and help the hardware be a little bit more reliable thereby making the operating system's job better okay and yes device drivers run in supervisor mode which is one of the reasons they can crash the system except for microkernels which we'll talk about later in the term as well so the operating system as an illusionist is really part of our topic today we're going to talk about the four interesting functions of the operating system that really are leading to this illusionist idea and we're mostly going to work on the thread and process concepts today okay and um basically as an illusionist the os is going to try to remove hardware and software quirks as a way of fighting complexity and optimize for convenience utilization and reliability to help the programmer all right and for any os area you pick file systems virtual memory networking scheduling etc you can ask the questions of what hardware interface do you have to handle and what software
interface are you going to provide and oftentimes the hardware interface talks about a set of mechanisms that the operating system exploits to provide a set of uh clean mechanisms and policies up to software and we'll use that terminology as we go throughout the term okay and um yes it is true that complexity is a very hard thing to measure and to uh talk about in a quantitative sense but certainly you know what it means in a qualitative sense and it's that qualitative sense that tends to get in the way of people knowing that their systems are going to function properly now so today we're going to basically talk about four fundamental os concepts we're going to start with talking about what a thread is and a thread is going to fully describe a program state or a linear execution context it's going to have a program counter registers execution flags a stack etc this is going to be very familiar hopefully to all of you uh based on 61c and then we're going to now then move off into address spaces either with or without translation which are a set of memory addresses accessible to the program and we're going to talk about how with the right mechanism we can provide a much cleaner behavior than the underlying hardware and then we're going to introduce what processes are and finally we're going to talk about a particularly important hardware mechanism for the early parts of this class which is dual mode operation which is the fact that a typical processor has at least two different modes which we may loosely call kernel mode and user mode and we exploit that to give us our better virtual machine behavior okay yes so um moving forward now so what's the bottom line we're going to run programs and that uh we're going to learn how to write them and compile them so you guys get to do that uh right away with uh with uh homework zero and um project zero starting tomorrow and then once they've been uh written then we're going to talk about how things get loaded into memory okay 
after the executable's pulled off the file system it's loaded into memory we're going to talk about the stack and the heap getting put together for that particular process and then we're going to transfer control which really means the program counter of the processor is going to be pointing at instructions in the user code of that process and then the execution will start okay and then of course the operating system is going to provide services like file system and so on to the program and it needs to do all of these things while protecting the os and the process from other processes okay and other users all right so threads can be thought of in the following way and uh we'll talk a little bit later about threads and their heaps but if you look um for instance here back in 61c you got to learn about processors and the processor was something that started out with having a program counter as you recall and a memory that it could read and in that memory was a set of instructions okay and so that program counter would point into the memory and allow the processor to fetch the next instruction if you all remember so we'd pull the instruction in from memory we would decode it and then we would feed it to the execution pipeline the one that we often talk about in 61c is the five-stage execution pipeline for a risc-style processor and after things were decoded they would feed a set of registers and an alu to do actual operations and execute as desired and at that point you'd go on to the next instruction and so on and increment the program counter okay and so um this is hopefully familiar to everyone from 61c and if it isn't i would suggest that you guys go back and review a bit but let's talk a little bit about our virtualized version of what we said there so our first os concept is going to be a thread of control and a thread is really a single unique execution context and it's got a program counter registers execution flags stack memory
state and so now all of a sudden you're going to say well wait a minute is that just what you had on the previous slide okay and if you look uh yes what you learned about was a very simple fetch execute cycle in 61c once we get to something we want to provide to other people to users in particular we need to virtualize it and so the thread is going to be like a virtualized version of your 61c processor and a thread is executing on the processor or core so by the way i'm going to intertwine processor and core until we can make that a little bit more clear later but it's executing when it's resident in the processor registers so we may have many threads but on a given core only one of them is resident and has control of the program counter and registers at any given time okay so what does resident mean let's just be very clear so the registers have the context of the thread the resident state includes a program counter for the currently executing instruction the program counter is pointing at the next instruction in memory and all the instructions are stored in memory the resident state includes intermediate values for ongoing computations so typically once you get into pipelining which you started to learn about in 61c there's a lot of pipeline state involved in an ongoing execution if you're really interested in that i would highly suggest something like 151 or 152 where you learn a lot more about interesting pipelines and speculative execution we'll be talking a little bit about that throughout the term but that's more of a hardware architecture class but um resident also means that there's a stack pointer that has a pointer into memory which is the top of the stack and um pretty much the rest of the thread is in memory so there's some things in the registers and the rest is in memory okay and you'll see how that looks in a second so a thread is suspended or no longer executing when its state's not loaded in registers okay so it's kind of the opposite of
resident and at that point the processor state is pointed at some other thread so the thread that's suspended is actually sitting in memory and not yet executing or not executing at all while something else is executing so the program counter is not pointing at the next instruction from this thread because it's pointing at the instructions of the current thread okay now uh here is another view of what happens during execution this is another kind of 61c view if you look here here is the set of addresses which we're going to call an address space a little bit later in the lecture from 0 to 2 to the 32nd minus one and what it has in it is a set of instructions that are going to be executed and what you see in pink here is your processor okay and um let's hold off on questions about where the state's stored when the thread's not running the way a thread is different from a process if you can give me a few more slides we'll get to that as well but it's basically the process has a protection state associated with it so if you look at the set of registers and the pipeline that's the processor and this might be the currently running thread which means our execution sequence fetches an instruction at the program counter decodes it executes it writes it back to registers grabs the next instruction and repeats so this is a wash and repeat kind of scenario and so for instance the program counter might start at instruction zero and then goes to one and two three and four as we're going and this in essence is what it means to execute okay this is the basic von neumann machine that we're all very familiar with that you learn about in 61c and that we're going to take for granted because we're going to put an operating system on top of it the one thing that's going to be a little different from what you're used to is rather than risc-v which is kind of what they do in 61c we're going to use a more common processor called an x86 which
is the intel processor and probably all of you who have laptops these days all have x86s on them um i understand that uh apple is basically punting the x86 in some of the upcoming generations but we're probably all using the same processor and the set of registers are a little different from the risc-v processors you're used to so there's a smaller number of execution registers but there's also a lot of other things like segment registers and so on which we'll talk about over time but if you notice on the left here we have risc-v which had say 32 registers associated with it on the right we have x86 which has a much smaller number of registers that you can actually execute on and then a bunch of other state now the question about what an execution flag is came up in the chat and a lot of different processors have the following associated with them if you subtract two registers to get a third not only does it give you the result of that subtraction but then a set of flags gets set like when you subtracted those two registers was the result zero so that's like the zero flag or was it greater than or less than zero so there might be a greater than flag so those flags are then subsequently things that you can actually make branch decisions on so you might branch if equal and the way that's going to work is with an execution flag so um take a look in some of the supporting things that we have for you on the resources page and i believe in a section maybe on friday they'll talk a little bit more about the x86 as well but uh you're going to get very familiar with x86 okay so the question is are execution flags like the control logic from 61c the answer is no so think of execution flags as like a bunch of little one-bit registers that hold some of the comparative results of what you just did okay so they're tiny result registers and you can save and restore them during a context switch on certain processors and if you
look here see the e flags so those for instance are some of the flag registers that represent the results of execution so how do we get the illusion of multiple processors we talked last time about you know doing a ps aux or some other task manager on your laptop and uh if you look you'll find that there are hundreds of processes that are just running uh mostly sleeping but they're all available on your current processor and so how does that work so for the next i will say couple of weeks let's mostly assume that a physical processor has only one core on it or one thread of execution in the hardware at any given time and we will graduate to multi-core processors a little bit later but for now what we've got here is we want to have multiple cpus or the illusion of multiple cpus running at the same time so we can have multiple threads running at the same time we're going to have them all share the same memory so that the programmer's view is well i just have a bunch of things running and they all share memory okay um and the question is kind of how do we get the illusion here and this is not complicated it's kind of what you would think we're going to multiplex that hardware in time okay so threads are virtual cores and what i show you here is assuming again for a moment that we have only one processor or one core in the system then the way we get the illusion of magenta cyan yellow running at the same time is we just multiplex we run from cyan's thread for a little while and then from magenta's thread for a little while then cyan's then yellow's and so on and we repeat and so over time we get this multiplexing of the same physical hardware okay and um the contents of a virtual core or thread is what well clearly each one of these virtual cpus needs to have a program counter and a stack pointer and all of the register state that we're used to if we were running that thread on a single processor in 61c okay so there's registers you might ask
where is it the thread itself well if it's currently executing so for instance if we're in a period of time where magenta is running then it's in the processor itself and when it's not running it's saved in memory that state is called a thread control block or tcb okay um now let's continue on this illusion for a moment here so consider this multiplexed view in time so at t1 vcpu one is running at t2 vcpu two the blue one is running so what happened between one and two anybody want to hazard a guess so good so what happens is a context switch is the high level answer um the low level answer would be some event okay so the os got to run somehow between pink and blue all right and what happened during that context switch is we saved all of magenta's pc stack pointer and all the other registers in its thread control block in memory we loaded the pc stack pointer et cetera from vcpu two and we returned to run the cyan one okay so um interesting questions here that are coming up so first of all one question was does each thread here get its own cache and the answer is no okay so typically in general it's no typically there's one cache per core and so they're all kind of sharing the same cache so as you could imagine if you switch too quickly then nobody gets advantage of the cache okay and um yes the cache or the tlb in a primitive processor has to be flushed when you switch more advanced ones it doesn't we'll talk more about that as it goes on uh the cache itself is typically in physical space and when you're switching from one thread to another you just change page tables and so you don't actually have to flush the cache and we'll get into that you guys are way ahead of me on this um another question is how long does this take well this can take something of the order of a few microseconds and um you want to make sure that the time to switch isn't so long that you're spending most of your time switching okay that would be a
thrashing scenario that could be pretty bad okay um so the other question which is great you guys are on top of things wouldn't it be better to say run the magenta one to completion and then pick up the blue one so that would be a yes on efficiency but not so great on responsiveness okay because the poor task that's trying to run in that cyan or blue thread wouldn't get to run for a very long period of time potentially okay so we're going to talk about those issues when we get into scheduling okay so you can already see how you're all asking the right questions there's some very interesting ones here okay but let's move a little forward here um what triggered the switch well we've already said things like a timer went off or a voluntary yield we'll learn about that very soon where the magenta one maybe decided to ask the operating system to do some io at which point the os said oh okay let's schedule somebody else okay and uh the question about how many registers there are is going to depend vastly on which processor you've got so yes there are 32 integer registers on a risc-v which you guys are used to and yes there are some floating point ones as well on an x86 there's a much smaller number of registers and so when you don't have registers in the processor you got to keep things in memory so you spend a lot of time going back and forth so good questions so um now that we've started talking about how to get the illusion of multiple things we can start looking at what this model gives us and the model gives us the following we may have a bunch of memory that's in blue here and we could think of each one of these virtual processes you know green yellow orange have their own stack and heap and data and code and they're all laid out in memory somehow and what we need to do is somehow keep track of where everything is okay so the thread control block is where everything is so when we switch from green to yellow the first thing we're going
to do is save out all of green's registers into its thread control block which is by the way in part of the kernel memory which i'm not showing here okay the question about in-flight instructions that's in the chat's an interesting one so typically what happens when you get an interrupt is you end up flushing the pipeline so mostly in-flight state is all squashed when you switch okay we'll talk more about that a little later too where are the tcb stored they're stored in memory for now we're going to say they're stored in the kernel i want to say for now because we're going to talk about user level threads and some pretty interesting things in a couple of weeks excuse me but for now let's assume they're in the kernel and if you're you know you're start working on pintos which by the way you are we're going to release project zero tomorrow uh you should take a look at thread.h and thread.c right away you'll start to see how it is that pintos which is the operating system we're using for the projects implements threads okay all right so let's talk about some administrivia you should be working on homework zero okay it's due thursday already okay uh and you know the um the reason for homework zero is really to make sure that you have uh experimented with everything and you're ready to go so you get to experiment with gdb you get to experiment with compiling you get to work on your tools okay you get to learn about git if you're not sure about it you get your virtual machines up and running okay and so um we're gonna have project zero up tomorrow i know originally it wasn't up until thursday but we're pushing that a little forward project zero is a chance for you to really get going on the pintos projects and it's intended to be done on your own so do not do this with potential partners do this on your own and it's really about everybody who's going to participate in a group learning some basic things about how to run the projects okay and so again i suggest you get 
moving on that right away as well i did want to say something about slip days on projects and homeworks this term because of the virtual nature of this class and things are complicated and difficult to get moving we're upping the number of slip days to four for both homeworks and projects uh but when you run out of slip days you don't get any credit for things that you're slipping on so i would suggest that you bank your slip days okay don't use them up right away um because you may want them later in the term okay tomorrow is an optional review session for c and uh actually i think we're billing it as for a bit of c plus plus as well uh there's a zoom link that's going to be announced um it may be already up on piazza we will probably record it um but i would consider attending and in that we've got people who are gonna go over some of the basic things about c that you're gonna wanna know okay uh c plus plus is not really required for this class uh in answer to the question that's in the chat there you're really going to use c okay but uh it doesn't hurt to look at what we've got for c plus plus as well not really required means that the work you're doing in pintos is in c um and uh friday that is four days from now is drop date okay so it's an early drop date class and it's very hard to drop afterwards so if you're not interested in the class you should drop sooner rather than later so that people can be pulled off the wait list okay and the thing that you need to do is if you have friends who are either on the wait list or in class but they haven't been doing any work there are in danger of being put into the class without them knowing okay and you may think that that's ridiculous but it happens every term somebody wasn't paying attention and they end up three quarters of the way through the term and discover that they're in the class and they can't drop okay it's very hard to
drop late uh without burning you know your one and only late drop for the class so uh just try to make a decision on that okay any questions on that part okay it is true that berkeley does not have a mainstream c plus plus class in the computer science department there's lots of great languages out there and so somebody who knows c is at least i would say a third of the way to c plus plus but uh you may end up learning c plus plus on the fly for other classes like 184 for instance okay so just to remind you from last time you know this virtual class that we're in here is challenging for everybody okay and things are considerably different starting off remote not even starting off physically and so that means that we've got to figure out how to re-establish the people interactions and collaborations that we would have if we were in person okay um how do you recover collaboration without direct interaction that's going to be challenging and so i'm asking everybody here i'm putting out a plea to do your best to talk to people more than you would in a real term okay you gotta have more meetings uh drink coffee with your friends on zoom more often or with your groupmates okay this is important um and you gotta figure out how to bring people along virtually okay it's very easy in this world where um i heard umesh vazirani describe what we're doing right now as we flatten the world graph okay so everybody's equidistant from everybody else and as a result nobody's close to anybody okay yes i've become a flat earther with respect to this class cameras as i've mentioned before are essential components of this class so what we're trying to do is make sure that people maintain their interactions okay and we're gonna need it for exams and discussion sessions design reviews et cetera okay and um have a camera plan to turn it on and let's try to keep that human interaction going okay um the uh we need to
bring back personal interaction um humans are not good at interacting via text only you can kind of see what happens with twitter in public life is really not a great thing and so let's do everything as in person as we can get with camera interaction okay and uh you're gonna have required attendance at the discussion sessions design reviews et cetera with the camera turned on okay now uh the other thing i wanted to remind you guys of is the collaboration policy all right you gotta — um you know if you're explaining a concept to somebody that's okay if you're discussing algorithms or testing strategies with other groups that can be okay if you're discussing maybe debugging approaches with other groups but kind of at an abstract level that's okay you're also allowed to do searching for generic algorithms like hash tables okay these are all okay what you're not allowed to do is share code or test cases with other groups okay and we track that okay we have a mechanism to compare people's code with other people's code from earlier terms and in the class and so on just don't do it copying or reading another group's code or test cases just don't do it copying or reading online code or test cases from prior years or other members of your group just don't do that okay uh helping somebody to debug in detail in another group don't do that either okay because what happens is um if you know you're helping somebody debug you're now not only looking at their code but you're importing your code kind of conceptually into their code and we have had situations where debugging essentially caused the code of the person that was being helped to match against the other code and both groups got in trouble so just say no okay um we compare all project submissions against prior year submissions and online solutions and so this is it you know just don't do it and also don't put a friend in a bad position by asking for help that they shouldn't
give you okay all right good now are there any other administrivia questions nope okay um now let me see there was a question oh there was a good question in the chat from before i started the break which was you know why not just kill off one stage of the pipeline at a time you know it turns out — oops sorry — that it's very difficult to restart pipelines if you try to save the state and restart it that's called precise exceptions and if you can only save part of the state and restore part of the state that's an imprecise exception turns out that gets complicated very rapidly it makes getting correct os code really hard so in general they don't do that okay and i will mention that a little bit later when we get into page fault handling as well okay now so uh if people could turn off their cameras during class that would be good please so uh the second os concept we're going to talk about today is address spaces okay and um the simple idea is that it's the set of accessible addresses and the state associated with them so if you think back to 61c you've got let's say a 32-bit address from zero up to ffffffff and this is the view that a processor has of memory now that's not to say that there's dram in all of these spaces it's just that this is the processor's view of what addresses are available and so for a 32-bit processor by the way i'm gonna make you guys all know about powers of two so if you don't know them yet you should start learning them um but two to the 32nd for instance is four billion approximately okay 10 to the ninth and two to the 64 is 18 quadrillion so that's a lot of addresses okay but um if you think of the address space as all of the potential places that the processor could go and then there are ones that actually are backed by dram then there's some state associated with them and the question might be well what can you do when you read or write to an address well perhaps it
acts like regular memory or perhaps it ignores the write entirely or perhaps the system causes an i o operation to happen that's called memory mapped io or perhaps it causes an exception it's possible if you try to read or write somewhere in the middle between the stack and the heap that i'm showing you here and there's no memory assigned to that process you get a page fault okay or maybe the act of writing to memory communicates with another program okay so um there's a lot of uh a lot of possibilities here okay now uh so am i saying oh i meant quintillion there didn't i okay thanks for the catch uh so um in a picture i'll fix that slide by the way so in a picture the address space is kind of like this okay so here is your 61c processor registers okay the program counter points to some address and the stack pointer points to some address typically the bottom of the stack and um other registers might point to things in the heap or et cetera and the fact that the pc can point to an address and it can fetch from that address means that we can have a processor that actually executes an instruction at that address okay so whatever we come up with with our threading and protection model it's going to involve accessing the address space okay and so what's in the code segment um well it's code that's not too surprising what about the static data segment anybody have any idea what would be in the static data segment so many of you have started looking at gdb great static variables yup global variables et cetera things that are um explicitly declared rather than allocated with malloc good yup string constants all of those things are typically in the static data and that's loaded at the point where the program's first loaded while the process is being created what's in the stack segment anybody remember what is on the stack yeah local variables okay we're going to go through this more but you should look back at 61c and recall what the stack's about
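to make those regions concrete here's a minimal c sketch assuming a typical layout the helper name copy_greeting is made up for illustration the global and the string constant sit in static data the local buffer sits on the stack and the malloc'd block sits on the heap

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

int global_counter = 0;          /* static data: allocated when the program loads */
const char *greeting = "hello";  /* string constant: also lives in static data */

/* hypothetical helper: the local buffer tmp is on this call's stack frame,
   while the malloc'd copy is on the heap and survives after we return */
char *copy_greeting(void) {
    char tmp[16];                               /* stack: gone when the frame pops */
    strcpy(tmp, greeting);
    global_counter++;                           /* static data: persists across calls */
    char *heap_copy = malloc(strlen(tmp) + 1);  /* heap: explicit allocation */
    strcpy(heap_copy, tmp);
    return heap_copy;                           /* caller must free this later */
}
```

notice the stack copy in tmp is only valid during the call while the heap copy outlives it which is exactly why malloc'd data goes in the heap segment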
right so the stack is when you do a recursive call to a function the variables of the previous function are pushed on the stack and then the stack pointer moves down and then when you return you uh pop them off the stack and the stack pointer moves up so i also see locals that's correct so local variables the uh how do we allocate it well we'll talk more about that one of the things that's going to be a very cool thing the operating system can do once you've got virtual memory is you can start the stack off with just a couple of of pages and then as uh the stack tries to grow it's going to cause page faults and the operating system will then be able to add more physical memory to the stack okay and the same is true of the heap so the heap is when you allocate things with malloc or so on or you do linked lists all of those things typically lay in the heap and the heap also starts out with less physical memory than maybe the program ultimately needs and as the program starts to grow you get page faults which will allocate things on the heap okay so you don't have to worry about having caught all that now but i'm just giving you some ideas okay what's in the heap segment well i already said that that's things that you have allocated with malloc think of structures with pointers think of linked lists think of all sorts of things okay there's a rather amusing comment in the chat that operating systems are really convoluted magic um maybe on the other hand i'm hoping that by the end of the term you'll see that they're just very clean magic okay all right certainly parts of the operating system can be easier to validate than a compiler now so our previous discussion of threads is what i would call very simple multi-programming okay all of these vcpus share the same non-cpu resources the only thing we virtualized with our current threads are the registers the program counter the stack pointer nothing else so that means they all share all the rest of
memory they all share i o devices they all share everything else okay and that could be an issue now the question that's on the the chat which i find interesting uh here right now is can they uh each assume they have infinite stack or heap uh that's a really tricky question that we'll have to uh defer a little bit more um for for a week or so but the short answer is that uh if they actually are threads and they're staying in the same address space then the threads can mess each other up by overwriting each other's stacks but that's actually uh sort of a design feature okay if on the other hand you don't want that to happen you put these in separate processes so hold on for the rest of the lecture on that part okay now the os's job is going to be making the virtualization be as true as possible given the resources and whether it's a process or not so um so if each thread can read or write every other thread's memory maybe their data maybe their security keys and so on could it overwrite the operating system well so far i haven't given you anything that would prevent you from overwriting the operating system and uh back in the early days of personal computing okay we're talking about some of the original ibm pcs some of the original macintoshes which were these weird-looking square boxes uh the early days of windows definitely all had this problem okay that uh yes we provided the ability to have illusions of multiple threads at the same time but they were all in the same address space and they could overwrite each other okay so we want to do better okay so is this an unusable environment well depends on your definition of unusable it's certainly not very secure okay and it's not even very secure against your own bugs okay and we'll talk a lot about that but you'd like a system that uh when you put a buggy piece of software up and run it it doesn't crash everything else that would be kind of a minimum requirement i would say okay so this approach as i said was used the
very early days of computing it's used on some embedded systems still macos you know windows 3.1 windows me a lot of those different ones basically had this view okay however it's risky you know one of my favorite things to do you'll find out my favorite number is pi um because why not i mean it's a great number uh but it goes on forever but what's interesting is you can imagine that the magenta one decides to compute the last digit of pi and it never gives up the cpu locks out the timer interrupts and now uh blue and yellow never run okay that's a system we do not want to have and that is a system we used to have okay i worked on windows 3.1 systems where you put the wrong application in there and all of a sudden you know everything locked up so that's rather undesirable so no protection so the operating system has to protect itself from user programs and there are lots of reasons for this right from a reliability standpoint uh compromising the operating system generally causes it to crash of course it does security you want to limit the scope of what malicious software can do privacy i want to limit each thread to the data it's supposed to access i don't want my cryptographic keys or my you know secrets to be leaked and also fairness i don't want a thread like that one that decided to compute the last digit of pi to be suddenly able to take all of the cpu at the expense of everybody else okay so there's lots of reasons for protection and the os must protect user programs from one another okay prevent threads owned by one user from impacting threads owned by another one um all right so let's see if we can do better okay so what can the hardware do to help the os protect itself from programs well here's a very simple idea in fact so simple that little tiny iot devices can do this with very few transistors and the idea is what i'm going to call base and bound and so what we're going to do is we're going to have two registers a base register and a bound
register and what those two registers talk about is what part of memory is the yellow thread allowed to access okay what part of memory is the yellow thread allowed to access now we're still going to call this by the way i've got this uh sorry zero at the top and ffff at the bottom i've swapped this for you guys but um we are going to be able to put two addresses one in base and a length or an address in bound depending on how you do it and now we're gonna see whether we can limit yellow's span to just that range of addresses okay we're still gonna say the address space is from zero to all f's it's just that a big chunk of that address space is not available to the yellow thing okay and so what happens here is a program address that fits somewhere in the the valid part of the program what really happens is the program has been relocated it's been loaded from disk and relocated to this portion of memory one zero zero zero and so now when the program starts executing it's working with program counters that are in the say one zero one zero range which is kind of right uh where the code is and hardware is going to do a quick comparison to say is this program counter greater than or equal to base and is it less than bound okay and these are not physical uh excuse me these are still physical addresses these are not virtual addresses yet we'll get to that in a moment okay and this allocation size is uh challenging to change in this particular model okay because in order to get something bigger we might end up having to copy a lot of the yellow to some other part of memory that's bigger so you can see that this is just a very primitive and simple thing okay but what it does do is it gives us protection so the yellow code can run it could do all it wants inside the yellow part of the address space but it can't mess up the operating system or anybody else's code okay all right and uh whether base and bound are inclusive or not that's sort of a simple
matter of whether you include equals or not so let's not worry about this obviously base is inclusive in the way i've shown it here um so the other thing is every time we do a uh lookup we make sure that we're less than bound so it's not inclusive on the top in this particular figure and greater than or equal to base and if that's true we allow it to go through and if it's not true then we uh do something like uh kill the thread off or something okay now the address here has been translated if you look this is what it might look like on disk you know its code starts at address zero there's some static data after the code maybe there's a part of the heap or stack that's going to be in there once it's loaded but in some sense it looks like everything starts at zero however when we load it into memory we relocate all the code so that it starts at address 1000 and is runnable from that point and as a result things uh execute properly so this is a compiler and loader based relocation okay but it allows the os to protect and isolate okay it needs a relocating loader now this by the way was what a lot of early systems did is they did relocation okay and had some base and bound possibilities to work so for instance some of the early machines by cray had this behavior okay notice also that we're using the program counter directly out of the the processor without changing anything so we're not changing any of the latency through transistors because we're not adding any uh extra translation overhead as well okay and the gray part up here might be the os yes now if you remember in 61c we talked about relocation um so for instance if you do a jump and link to the printf routine um what that translates into is relocatable code where maybe the jal opcode is uh hard-coded but the printf address is not until things get actually loaded and then this gets filled out so this might be a relative address until the linker and loader pulls it into
memory okay all right so we can do this with the loader okay but a number of you have started to ask more and more about virtual memory well here's another version of virtual memory that is actually well it's uh the previous one was a hardware feature because in hardware we're preventing uh the program when it's running in user mode or as a user from accessing the os so that's a hardware based check okay it's not software all right now the uh a slight variation on the base and bound is this one where we actually uh put a hardware adder in here okay and this hardware adder one way to think of this is that addresses are actually translated on the fly so now we take our yellow thing off disk and we load it into memory and it might still be at address 1000 but the difference is that the program counter is now uh executing as if it were operating in this uh code that starts from zero but in fact what happens is by adding the base address to the program counter we get a translated address that's now up in the space where yellow actually is okay all right so this particular uh version of this is uh it's very simple and it doesn't require page tables or complicated translation okay so this is a hardware relocation so on the fly the program counter which is operating as if we're in the yellow region we add a base to it and the thing we actually use to look up in dram is the uh new address the physical address that we get from this virtual address added to the base pointer okay and can the program touch the os once again no because if the program address goes negative we can catch that uh and so that would be below the base address and if it goes too large above the bound then we would also be outside of yellow and so we basically protect uh the system against the yellow okay and so um once again we're still doing checks here now can it touch other programs no because the bound catches it so one way to get at this is also with segments okay so in the
x86 code or x86 hardware we have segments like the code segment the stack segment etc which are hardware registers that have the base and bound coded in that segment so a code segment is something which has a physical starting point and a length and then the actual instruction pointer that's running is an offset inside of that segment so the the code segment is very much like this base and bound because we do this addition on the fly and checking for the uh the bound okay and the question about where does the base address come from how do we decide what the base address is how do we decide what the bound is well the os is basically doing a best fit of the current things it's trying to run into the existing memory now a different idea which they did bring up in 61c which everybody's clearly familiar with is this idea of address-based translation so notice that what we just did was a very primitive version of translation where we took every address coming out of the processor and we added to it a base and we checked it against the bound and that translation now is just add a base and check a bound but um the thing we could do that's even more sophisticated is we could take every address that comes out of the processor and go through some arbitrarily complicated translator and have it look up things in memory so uh if you look at uh how that might be so let's think for a moment what was the biggest issue with this so there's several issues not the least of which to grow the space for the yellow process or thread i haven't told you how to distinguish those yet we would have to um copy the yellow thing somewhere else okay and and what we're going to do is then when yellow finishes and goes away we've got a hole we've got to fill and so there's a serious fragmentation problem so we'll talk more about that in an upcoming lecture but what we're going to do is we're going to break the address space which is all of the dram into a bunch of equal sized chunks all pages are the same size
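before moving on to pages here's a tiny c sketch of the base and bound translation with the adder the function name translate and the fault sentinel are made up for illustration the program issues addresses as if it were loaded at zero the hardware adds base and the result has to land between base and bound or the access faults

```c
#include <assert.h>
#include <stdint.h>

#define FAULT UINT32_MAX  /* hypothetical sentinel standing in for a hardware trap */

/* base is added to every program address on the fly and the result must
   fall in [base, bound) -- inclusive at base, exclusive at bound */
uint32_t translate(uint32_t vaddr, uint32_t base, uint32_t bound) {
    uint32_t paddr = base + vaddr;        /* the hardware adder */
    if (paddr < base || paddr >= bound)   /* wrapped below base or past bound */
        return FAULT;                     /* real hardware would raise an exception */
    return paddr;
}
```

so an address like 0x10 issued by yellow becomes 0x1010 once base 0x1000 is added while anything that lands outside the region faults which is exactly the protection check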
so it's really easy to place each page in memory and the hardware is going to translate using a page table okay this is 61c okay special hardware registers are going to store a pointer to the page table and we're going to treat memory as a bunch of page size frames and we can put any page into any frame etc we're going to talk a lot more about this don't worry uh in upcoming lectures and this is another 61c idea okay but just roughly speaking so just from a high level don't worry about the details yet but if we take something like a program counter or registers that are pointing at memory we go through a page table to translate them that's going to give us a part of memory and now we can do interesting memory management okay now by the way the reason i'm covering all of these ideas is i'm giving you something to think about okay and we're gonna in in the upcoming lectures we are going to fill in details on this okay but this is some part of the story okay so instructions operate on virtual addresses um and these are you know instruction addresses load store addresses etc they get translated to a physical address and this is the dram so the processor is looking in one address space and dram in another okay and any page of the address space can go in any page sized frame in memory etc this is going to be great once you get a better handle on this because it's going to not have the same fragmentation problem that the base and bound did that we talked about earlier okay now the third os concept that we want to talk about today is a process and a process is really an execution environment with restricted rights so if you remember we talked about our simple virtual threads having this problem that everybody had access to everybody's memory well we started to note how mechanisms for translation might protect us okay and so the idea of a protected chunk of memory that's owned exclusively by an entity in an os that's called a process okay so the process has an execution
environment with restricted rights and one or more threads okay so it's a restricted address space and one or more threads it owns some file descriptors and file system context we'll talk more about that as we go on and it's going to encapsulate one or more threads for sharing in a unique environment okay and the good question uh is how do we protect this there's a question on the chat which partially addresses this which is of course if you have two processes and their translations point to different physical memory then kind of by design they can't uh get at each other's data because there's no way for a processor running process a to even address the data in process b okay and that's the advantage of translation we'll talk a lot more about that in upcoming lectures so a process is an address space with one or more threads okay and the application program when you start it up typically executes as a process so we'll talk about fork and exec and how do we create processes but the upshot is we create a restricted address space environment and then we can run one or more threads in it and now all of a sudden we've got a process and that becomes our unit of protection uh for the first couple of weeks of the class okay and the page table is really going to translate between virtual addresses and physical ones and it can do both in a forward and a reverse fashion so we're going to talk about page tables in quite a bit of detail so don't worry about the details yet they're they're going to be coming just think of the high level idea of translating for now so why processes well because we're protected from each other and the os is protected from them and processes provide that memory protection abstraction okay and there's this fundamental trade-off between protection and efficiency so if you have a bunch of threads all in the same process then yes they can communicate really easily because they share the same
memory so they can communicate by one of them writes in memory the other one reads from it but they can overwrite each other okay so there are times when you want to have high performance parallelism where you want a bunch of threads in a process but then when you want protection you want to limit the communication between processes so communication is intentionally harder between processes and that's where we get our protection from so here's a view of two different types of processes here's a single threaded one and a multi-threaded one so uh the only difference between these two is that the multi-threaded one has more than one thread running if you notice this box that i show here for the single threaded process for instance is the protected address space so everything that's going on inside here cannot be disturbed by something going on inside a different process and in this example since there's only one thread we only have sort of one stack and one heap and the code and data kind of live in there and this protected environment is great for the thread nobody can disturb it but if it needs to communicate it needs to figure out how to communicate outside of its process a multi-threaded process actually has a different stack for each thread because you need a stack to have a unique thread of execution it has a separate set of registers for the thread control block so that when we switch from thread to thread to thread to give that illusion of multi-processing we need to switch out the registers from the first thread so that we can load them back from the second thread okay so threads encapsulate concurrency the address space is the protection environment you could kind of think of threads as the active part and the address space as the protected part that may or may not help you but the protected address space is really going to keep buggy programs or malicious ones from impacting each other okay and why have multiple threads per process
well there are many reasons you might do this one is parallelism so if you actually have multiple cores it's possible that by having many threads in the same process you can have many things working on the same task at once you learned about parallelism in in some of your early classes like 61a the other reason might be concurrency so a good reason to have many threads in a process could be well most of them are sleeping but thread a deals with mouse input thread b deals with window movement thread c deals with io to disk or network or whatever and so that concurrency is a situation where most of the threads are sleeping most of the time but it's a much easier programming model to think of things as a thread that runs for a while does some i o goes to sleep and then wakes up when the i o is done and as you get more familiar with threads and how they work you'll be able to figure out kind of you'll have a better idea why that's helpful um so the question of is there a parallelism efficiency advantage you'd get from a process that you couldn't get from a thread so keep in mind that a process has threads in it so think of the process as the container and the threads as the execution element okay and yes these are a little bit like fibers but a little more heavy weight okay so why do we need processes for reliability security privacy okay bugs can only overwrite memory of the process that they're in malicious or compromised processes can't mess with other processes now of course if the operating system is compromised all bets are off but we'll even talk about later in the term we'll talk about how to uh set up situations where even if the operating system is a bit compromised the uh the things that are running in it might still be secure okay mechanisms to give us protection and isolation well we already talked about the fact that we need some hardware mechanisms for address translation we showed you the very simplest which was an adder in the hardware we
hinted at something much more complicated like page tables or whatever but also we have to worry about well if we have page tables why can't process a change its own page tables to point at process b because that would destroy all of the protection okay and so that leads us to our fourth mechanism which is we need the hardware to support some privilege levels of some sort and so that's the idea of dual mode okay so hardware provides at least two modes okay and um the two modes are kernel mode and user mode okay or supervisor mode and uh certain operations end up being prohibited when you're running in user mode so when you're in user mode you can't for instance change which page table you're using that's something only the operating system in kernel mode can do you can't disable interrupts okay so that way a process that's decided it wants to compute the last digit of pi can't prevent other processes from getting cpu time when the timer goes off okay you're also prevented from interacting directly with hardware et cetera thereby not being able to reach files on disk and now the question is what's our carefully controlled transitions between user mode and kernel mode things like system calls interrupts exceptions okay and so you could think roughly speaking that we have user processes they make a system call into the kernel that's a transition from user mode to kernel mode that's very well controlled we'll talk more about that and then the kernel does a return back to user mode for the user to run okay and so this is a typical unix system structure monolithic unix system structure where uh kernel mode represents code that has secure access to all sorts of resources it controls the hardware directly and then of course user mode is something that uh it's all of your programs and your libraries and so on and so user mode is your application but then it uses services from kernel mode and that's the operating system kernel okay so for instance here's an
example where we've got hardware got kernel mode and user mode the kernel might execute or exec a new process okay and then later we'll exit and return to kernel mode okay a system call from user mode would go into kernel mode and then it might return later or an interrupt might cause user mode to go into the kernel which then might check out the hardware somehow and then eventually do a return from interrupt an exception which is something like you divided by zero or a page fault an exception might go into the kernel and then eventually return okay now there's additional layers of protection than just the two i talked about here and when we get into talking about virtual machines and actually containers like docker and so on we'll talk about how to put more layers than just the two but for now we're dealing with dual mode okay now tying it all together uh we can tie it all together very simply okay um and give me uh so tying this all together uh is the following so if you notice here i have two processes a green and a yellow one okay and uh yeah that was like a page fault and uh if you look at uh the os here is the gray code and if you notice our system mode right now is kernel mode so it's red and uh it's on and when we're in kernel mode with simple base and bound what you see here is that there may be base and bound registers but they're being ignored because in system mode we have access to the full address space okay and let's take a look so the os is going to load a process okay what that really means is it's going to take a register which we're going to call the user pc load it with a pointer to the starting part of the yellow code and if you notice uh there's going to be a bound or a potential uh top of that uh area and what we're gonna do is we load the the yellow off of disk we set up these registers okay but notice by the way since the os is running uh the pc is still pointing to gray not to yellow okay but what we're
going to do is then we're going to execute a return from interrupt or return to user and that's going to start us running the yellow code now the question is why does the stack grow up in these diagrams that's because i've got 0 and fff reversed so the lower part of the address space is up top here and the higher part is uh on the bottom sorry about that um but notice right now the the kernel has full access to everything if we um now do a return to user what's going to happen is um that we're going to activate this yellow one okay so the privileged uh instruction is to set up these special registers like the base and bound registers are going to get set up and so on so notice we've set base to the beginning we've set bound to the end we've set up some special registers we've set up the user pc and we're going to do the return to user mode and that's going to basically do two things one it's going to take us out of system mode which is going to activate these base and bounds and it's going to cause the user's pc to be swapped in for the existing kernel pc and now all of a sudden after i do that voila we're running in user mode okay why do i say we're running in user mode the answer is that um right now because we're not in system mode we're in user mode the base and bound are active and so the code that's running can't get out of this little container okay all right and um coming back so how does the kernel now switch so now we've got this guy running what do we do well we're going to have to take an interrupt of some sort and say switch to a different process okay so um the first question we have to ask before we uh figure out what the switching involves is how do we return to the system all right and i showed you some opportunities there a little uh a moment ago but we have three so for instance a system call uh is one where the process requests a system service that actually takes it into the kernel okay another is an interrupt
okay this is the case where an asynchronous event like a timer goes off and takes us into the kernel and a third one is like a trap or an exception um it turns out that these could be examples where uh we get a page fault or where we divide by zero okay now an interesting question in chat which i don't have a lot of time to answer right now is what if a program needs to do something that can only be done in kernel mode the answer is you got to be really careful so one answer would be you can't you got to do only the things that are provided as apis from the kernel that's why the set of system calls is so important to make sure it's general enough for what you want to do the second answer gets much more interesting which is typically not something we talk about this term in 162 but we could maybe and that's where we have an interface for downloading specially checked code into the kernel to run in kernel mode uh in a way that it doesn't uh compromise the security but uh that's a pretty interesting topic for a different lecture so we could uh invoke any of these things system call interrupt or trap exception to get us into the kernel so that we can do a switch so let's assume we do an interrupt okay and these are all unprogrammed control transfers so how does this work we're going to talk a lot more about this in another lecture but the way this works to have the interrupt go into a well-defined part of the kernel is we're going to actually have that interrupt the timer interrupt look up in a table that we have put in there when the the os boots and pick the interrupt handler and that interrupt handler is now going to run and make a decision about whether it's time to switch from process a to process b and this is by the way the topic of a lecture in a week or so okay so we're getting close to those topics so um so let's continue here's our example the yellow code's running and if
you notice the program counter again is in the yellow code and so on and how do we return to the system maybe an interrupt or io or other things we'll say an interrupt for this and what happens at that point is we're now back in the kernel so notice that we're at system mode we're running the pc is at the interrupt vector of the timer and we've got these registers from the yellow which uh have been saved as a result of going into the interrupt and so what we're going to do is we're going to save them off into the thread control block we're going to load from the thread control block for green okay and by the way somewhere in the kernel is the yellow thread control block and then voila we return to user and now the green one's running so really this idea of swapping between processes and using dual mode execution is pretty much shown by this example i just showed you there which is you run for a while at user mode the timer goes off you save out the registers you load in other ones you return to user again and you just keep doing that back and forth all right and the question about how the current process knows if there's an interrupt the answer is it doesn't what happens is the interrupt goes into the kernel and saves all of the state of the yellow code in a way that when it's restored and starts executing again it doesn't know okay all right in a typical time period for how frequently the timer goes off uh it's typically like 10 or 100 milliseconds between timer ticks okay now we're uh we're now officially out of time but i want to leave you with one more concept what if we want to run many programs so now we have this basic mechanism to switch between user processes in the kernel the kernel can switch among the processes we can protect them but these are all kind of mechanisms without sort of policy right so what are some questions like how do we decide which one to run how do we represent user processes in the operating system how do we pack up the process
and set it aside? How do we get a stack and heap, et cetera, et cetera? All of these are interesting things we're going to cover. And, you know, aren't we wasting a lot of memory? All of these things. So there is a process control block, just like the thread control block (don't worry, we'll get to that), and that's where we save the process state, and inside of it will be the thread control blocks for all the threads that are there. And then the scheduler is this interesting thing, which some might argue is the operating system: every timer tick, it looks at all the ready processes, picks one, runs it, and then the next timer tick it runs the next one, and so on. Part of that process is unload and reload, unload and reload, with some task called the scheduler selecting which is the right one based on some policies. All right, so we are done for today. In conclusion, there are four fundamental OS concepts we talked about today. The execution context, which is a thread: this is what you learned about in 61C; we didn't call it a thread because it wasn't properly virtualized yet, but it's basically something with a program counter, registers, execution flags, and a stack. We talked about the address space, the part of the addresses visible to a processor, and once we start adding translation, now we can make protected address spaces, which are protected against other processes. We talked about a process being a protected address space with one or more threads. And we talked about how the dual-mode operation of the processor hardware is what allows us to multiplex processes together and gives us a nice, secure model. All right, so there we go; that is, in a nutshell, a modern OS. That'll be the end of this class; there'll be a final in several months... oh wait, I'm just kidding. I hope you guys have a great night, and we will see you on Wednesday. Ciao, thank you |
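The scheduler loop described above (every tick: pick a ready process, run it for a quantum, put it back) can be sketched round-robin style. This is a toy policy for illustration, not what any real scheduler actually does:

```python
from collections import deque

def run_round_robin(ready, ticks):
    """Each timer tick: unload the running process, reload the next ready one."""
    queue = deque(ready)
    schedule = []
    for _ in range(ticks):
        process = queue.popleft()   # scheduler picks the next ready process
        schedule.append(process)    # "run" it for one quantum
        queue.append(process)       # unload it; it goes back on the ready list
    return schedule

# Three ready processes, five timer ticks: each gets a turn in order.
assert run_round_robin(["A", "B", "C"], 5) == ["A", "B", "C", "A", "B"]
```

Real policies weigh priorities, fairness, and I/O behavior; round-robin is just the simplest example of the "pick one, run it, pick the next" structure.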
CS_162_Operating_Systems_and_Systems_Programming_Berkeley | CS162_Lecture_22_Transactions_Cont_EndtoEnd_Arguments_Distributed_Decision_Making.txt | welcome back, everybody, to CS 162. We're going to continue our discussion of ways of getting reliability out of file systems, and then we're going to dive into some interesting material on distributed decision-making. If you remember, last time we were talking about one of the ways that we get performance out of a file system, and that's with a buffer cache. The buffer cache, of course, is the chunk of memory that's been set aside to hold various items, including disk blocks, and the example that I've shown here was basically that when we talk about a file system, and we have directory data blocks and inodes and data blocks, etc., they're actually put into the buffer cache, which is typically handled LRU and is the temporary waypoint for data moving on and off the disk. This is, of course, the starting point for allowing us to read and write single bytes of data at a time, but it also is an important performance enhancer. We talked, among other things, about keeping dirty data in the buffer cache and not pushing it out to disk right away, and that had some pretty important performance benefits. It also has some potential issues with reliability, if you should crash while the dirty data is still only in memory and not on disk. The other thing that we started talking about along those lines was what I like to call the "ilities": availability, durability, and reliability. Keep in mind that availability is kind of the minimum bar to meet, and it's not a very good one oftentimes. Availability is typically the fact that you can actually talk to the system and it will respond to you; it doesn't say that it'll respond correctly. The other thing that's often the case is we'll talk about the number of nines of availability, so three nines typically means that
there's a 99.9% probability that the system will respond to you. More important than availability, in my opinion at least, is durability and reliability. Durability says that the system can recover data despite the fact that things are failing, and reliability is the ability of the system to essentially perform things correctly, and that's really what you want: you want reliability, not just availability. And by the way, the example I like to give about the difference between durability and availability is that if you think about the Egyptian pyramids, there was a time when people didn't know what the various hieroglyphs meant. What was written on the pyramids was extremely durable, but it wasn't available, because people couldn't decipher it; it became available only after the Rosetta Stone was discovered. The other thing we talked about last time is ways to protect bits: not necessarily ways to protect the integrity of the operating system and file system, so to speak, but the integrity of the bits. We talked about RAID, which you know from 61C, and in general RAID X, whatever your level is, is a type of erasure code, which is a code in which certain disks are gone and you fill in the missing disks using the code. That's called an erasure code, and the reason you're able to do that is essentially because the disks have error-correction codes on them that let them recognize when the disks themselves are bad; then you treat the whole disk as an erasure and you bring in the RAID codes. And what I did say was that today disks are so big that RAID 5, which is what you learned about in 61C for instance, is really not sufficient, because it can only recover from one failed disk, and disks are so big now that while you are busy recovering that disk by putting a new one in, another might fail, and at that point you just lose all your data. So if you ever have a
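To make the erasure-code idea concrete, here's a minimal RAID-5-style sketch in Python (an illustration of the XOR principle, not any real RAID implementation): one parity block per stripe, and any single erased block can be rebuilt by XORing the survivors with the parity.

```python
from functools import reduce

def parity(blocks):
    """XOR all data blocks together, byte by byte, to form the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(surviving_blocks, parity_block):
    """Recover a single erased block: XOR the parity with all survivors."""
    return parity(surviving_blocks + [parity_block])

data = [b"AAAA", b"BBBB", b"CCCC"]    # three data blocks in one stripe
p = parity(data)                      # stored on the "parity disk"
lost = data[1]                        # pretend disk 1 failed (treated as erased)
recovered = rebuild([data[0], data[2]], p)
assert recovered == lost
```

This is exactly why RAID 5 tolerates only one failure: with two blocks erased, a single XOR equation per byte is not enough to solve for both unknowns, which is what motivates RAID 6 and the more general codes below.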
big file system on a big file server, make sure you pick at least RAID 6, which allows for the possibility of two failed disks; for instance, EVENODD is a code that works for two disks, and that's available in the readings. In general, you can do something called a general Reed-Solomon code, based on polynomials. As I mentioned last time, but I thought I'd put this out there: when you were learning about polynomials back in grade school, what you learned was that if you have an (M minus 1)-degree polynomial, then as long as you have M points, you can reconstruct the coefficients. So the clever trick with Reed-Solomon codes is you start with something that behaves like the real numbers, called a Galois field (we can talk about that offline if you like), you put your data in the coefficients, and then you just generate a bunch of points. Here's an example where I generate n points, where n is bigger than M, and as long as I get M of them back, I can recover the polynomial, and then I can get back my data. So that's an erasure code, because I can erase any number of these points as long as I still have M left; I can erase up to n minus M of them and still get my data back. That's a pretty powerful code, and you can choose how many failures you need to recover from. So oftentimes in geographic replication you can arrange to be able to lose, say, 12 out of 16 chunks of data, and that's extremely efficient. Good; I'm glad that CS 170 also talked about this. The other thing we talked about last time... by the way, were there any questions on erasure codes at all? So, well, you know that RAID 5 is as simple as XORing, and EVENODD is a slightly different type of XORing, so those are all very fast operations. The Reed-Solomon codes come in a bunch of different forms, some of which are fast and some of which aren't, and
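Here's a toy illustration of the polynomial idea, worked over the rationals rather than an actual Galois field, so this is not production Reed-Solomon, just the "M points determine an (M-1)-degree polynomial" principle: put the data in the coefficients, evaluate at n > M points, and recover the coefficients from any M surviving points by Lagrange interpolation.

```python
from fractions import Fraction

def evaluate(coeffs, x):
    """Evaluate the polynomial (coefficients low-to-high degree) at x, Horner style."""
    result = Fraction(0)
    for c in reversed(coeffs):
        result = result * x + c
    return result

def interpolate(points, m):
    """Recover the m coefficients from any m (x, y) points via Lagrange interpolation."""
    coeffs = [Fraction(0)] * m
    for i, (xi, yi) in enumerate(points):
        # Build the Lagrange basis polynomial for point i as a coefficient list.
        basis = [Fraction(1)]
        denom = Fraction(1)
        for j, (xj, _) in enumerate(points):
            if i == j:
                continue
            basis = [Fraction(0)] + basis          # multiply basis by x ...
            for k in range(len(basis) - 1):
                basis[k] -= xj * basis[k + 1]      # ... and subtract xj * basis
            denom *= (xi - xj)
        for k in range(m):
            coeffs[k] += yi * basis[k] / denom
    return coeffs

data = [Fraction(7), Fraction(3), Fraction(5)]          # M = 3 data symbols
shares = [(x, evaluate(data, x)) for x in range(1, 7)]  # n = 6 encoded points
survivors = [shares[0], shares[3], shares[5]]           # any 3 of the 6 suffice
assert interpolate(survivors, 3) == data                # up to n - M = 3 erasures tolerated
```

Real codes use a finite field so that every symbol fits in a fixed number of bits; the recovery math is the same shape.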
so there's a bunch of different types of Reed-Solomon codes, which are all isomorphic to this idea but rearranged in ways where it's really fast to encode in some instances; typically, though, the decoding phase is N-squared complexity, so decoding, when you've had failures, can be expensive. The other thing I talked about was that we'd been looking at file systems like the fast file system and NTFS, which overwrite in place: when you put new data into a file, you overwrite the blocks that had the old data in them. An alternative, which you might imagine is a lot more reliable, is a copy-on-write file system. So here's an example of a file system where I'm just showing you a binary tree; think of these as the pieces of the inodes, and the old version of the file, the blocks, are down here in blue, in this tree. The idea behind a copy-on-write system is that if I want to, say, write some new data at the end, or overwrite something, I don't actually overwrite the original data; I build a whole new version of the file that reuses as much of the old one as possible. So here was an example where I took this old block, added some new data to it, and made a new block with a copy, and by the time my new inodes are in alongside the old ones, by following the new version you can see that we've got a new version of the file with this part updated, but the old version is still there. So if I have a really bad crash in the middle of writing the new version, I can still recover the old version, and I can pull various tricks to decide how much of the old version, or how many old versions, to keep around. This is much more resilient to random failures, and there are file systems like that now. The question here is: is this more expensive in space or in time? It certainly is more expensive in space, if you want to think
of it that way, but what you're getting back is extreme resilience to crashes and failures, and the ability, if you decide that you wrote something incorrectly, to go back to a previous version. So there are some pretty nice benefits you get from the space overhead, because you'll notice we're not deleting old data right away. It can be a little more expensive in time if you have to worry about how these things are laid out; maybe it doesn't have as fast a read performance as something like the fast file system. So what about more general reliability solutions? Well, suppose we wanted to go back to the fast file system, say because we were worried about performance, and we wanted to make sure the file system and the operating system couldn't crash in a way that leaves things vulnerable. What might we do? One of the things we talked about was very carefully picking the order: you write the blocks, then you write the inodes, then you put the inodes in a directory, and so on, and you do this in an order such that if it fails at any point, you can throw out the things that weren't quite finally committed, go through a pass on the file system to find everything that's disconnected, and you're good to go. The problem is that requires very careful thought. The more general idea here is to use a transaction, which you've probably heard about if you've taken any of the database classes. The idea is that when you go to update a file, you use transactions to provide atomic updates to the file system, such that there's a single commit point at which the new data, the new version of the file system, is ready to go, and until you reach that commit point, anything you do to the file system can be undone. Now, if you think back to this copy-on-write example: as I'm writing everything here and producing my new version, the old version is fine, so if anything gets screwed
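Here's a minimal sketch of the copy-on-write idea (hypothetical in-memory structures, not any real file system's on-disk format): updating a leaf block creates new copies of that block and of every node on the path up to a new root, while the old root still describes the intact old version, and unchanged subtrees are shared.

```python
class Node:
    def __init__(self, left=None, right=None, data=None):
        self.left, self.right, self.data = left, right, data

def cow_update(node, path, new_data):
    """Return a NEW root that shares all unchanged subtrees with the old tree.

    `path` is a string of 'L'/'R' steps from the root down to the leaf to rewrite.
    The old root is never modified, so the old version survives a crash.
    """
    if not path:                       # reached the leaf: make a fresh block
        return Node(data=new_data)
    if path[0] == 'L':
        return Node(left=cow_update(node.left, path[1:], new_data), right=node.right)
    return Node(left=node.left, right=cow_update(node.right, path[1:], new_data))

old_root = Node(Node(data="A"), Node(data="B"))
new_root = cow_update(old_root, "R", "B'")
# Old version untouched, new version updated, unchanged subtree shared:
assert old_root.right.data == "B" and new_root.right.data == "B'"
assert new_root.left is old_root.left
```

Switching which root the file system considers "current" is then a single small write, which is exactly the single commit point discussed next.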
up, including just throwing out the new version, the old version is still there, and if the only thing I need to do is swap the old version for the new version with a single operation, that's a single point of commit for the new file system. So that's kind of like a transaction, though the transactional ideas are a little more general. So we're going to use transactions to give us clean commits for the integrity of the file system, and then, of course, we're going to use redundancy to protect the bits: the bits can be protected with Reed-Solomon codes and other erasure and error-correcting codes, RAID, etc. Now, just to remind you a little about what we mean by transactions: they're closely related to the critical sections we talked about earlier in the term. They extend the concept of atomic updates from memory, which is where they came up in the early part of the term, to stable storage, and we're going to atomically update multiple persistent data structures with a single transaction; as a result, we'll never get into a situation where the file system is partially updated and therefore corrupted. There are lots of ad hoc approaches to this transaction-like behavior. I just talked you through the copy-on-write one, and the fast file system originally would order its sequences of updates so that if you crashed, you could run a process that scanned the whole file system, called fsck, to recover from those errors; but again, that's very ad hoc. So the idea of a general transaction is like this: you start with consistent state number one in the file system, and you want to get to consistent state number two. Maybe consistent state number one is the original file system, and number two is what you get when you add some new files and directories and data, and the transaction is an atomic way to get from the first state to the second. And we know that underlying that single atomic view change, there's going to be a whole
bunch of underlying changes to individual blocks. The question in the chat here is: what did I mean by ad hoc? What I mean is that a person sits down and very carefully thinks through: well, if I update this, and then I update that, and then that, and the final thing I do is this, then I know that if it crashes anywhere along the way, I'll be able to recover the original file system. So ad hoc here means you come up with a solution that maybe works, but you've had to go through a long process of thinking it through to make sure it works, and it's possible you've got it wrong. That's what I mean by ad hoc; we want something a little more systematic, and we're going to use transactions for this. So, atomic: atomic is really the property that either everything happens or nothing happens, and atomicity via the log will hold even if the machine gets unplugged. You want to make sure we still have that atomic property; probably, if you unplug it and you've got this atomic property, what's going to happen is your changes simply won't happen. Okay, so let's walk through this a little more. Transactions are going to extend this idea from memory to persistent storage, and here's the typical structure: you start the transaction, you do a bunch of updates; if anything fails along the way, you roll back; if there are any conflicts, you roll back; but once you've committed the transaction, the mere act of the commit operation causes everything to become permanent. We'll talk about how to do this in a moment, but this "do a bunch of updates" part could be arbitrarily complicated: it could be allocating new inodes, grabbing some new blocks, linking them, doing all sorts of stuff, and the point is that none of that is going to permanently affect the contents of the file
system until we commit, and so that's what we're going to try to figure out how to do. That's the atomicity here: all of a sudden it happens, or it doesn't happen at all. Now, a classic example: transfer $100 from Alice's account to Bob's account. You see, there are a bunch of different pieces, right? Alice's account gets debited $100, that $100 goes through the branch account to the other bank, and then Bob's account somehow gets the balance, and so on. So there's a series of operations in different parts of various databases, and if only some of them happen, the banking system becomes inconsistent. For instance, if the whole system crashes between debiting Alice's account and incrementing Bob's account, then not only did Alice lose her $100, but Bob never got it either, and that would be bad. So this idea of beginning a transaction and then committing it is one in which none of these things happen until the commit. Now, modern operating systems: the question is, do they expose transactions to the user? It depends a little on which file system you've got; certainly there are some notions of transactions that are available, and others are less available. Right now, what we're going to talk about is mostly under the covers, in a way the user doesn't have access to. So the concept of a log, to make all this work, is the following: if you look at all of these pieces I've got here that represent parts of a global transaction, I'm going to write them into a chunk of memory or disk; think of this as the log, a big chunk of disk. All of these things are going to be in there, and they might be interleaved with other transactions, but we're going to view this log serially, starting from the left and going to the right, and we're going to start the transaction by putting a start-transaction marker in the log,
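As a toy illustration of the begin/update/commit-or-rollback structure (hypothetical in-memory accounts, nothing like a real database engine), here's a transaction that either applies every update or none of them:

```python
class Transaction:
    """All-or-nothing updates to a dict of account balances (toy sketch)."""
    def __init__(self, accounts):
        self.accounts = accounts
        self.pending = []                  # (key, delta) updates, not yet visible

    def update(self, key, delta):
        self.pending.append((key, delta))  # buffered, nothing has "happened" yet

    def commit(self):
        # Single commit point: reject the whole transaction if any update
        # would overdraw an account; otherwise apply everything at once.
        trial = dict(self.accounts)
        for key, delta in self.pending:
            trial[key] += delta
            if trial[key] < 0:
                self.pending = []          # rollback: as if nothing happened
                return False
        self.accounts.update(trial)        # everything happened
        return True

accounts = {"alice": 150, "bob": 20}
t = Transaction(accounts)
t.update("alice", -100)
t.update("bob", +100)
assert t.commit() and accounts == {"alice": 50, "bob": 120}

t2 = Transaction(accounts)
t2.update("alice", -100)
t2.update("alice", -100)                   # would overdraw
assert not t2.commit() and accounts["alice"] == 50   # rolled back, unchanged
```

The key property mirrors the bank example: there is no observable state where Alice has been debited but Bob has not been credited.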
and then we can go ahead and do all of our stuff, and everybody else can do their stuff, and it's only when we put a commit-transaction marker at the end that, all of a sudden, those actions atomically happen. Now, a couple of things should be clear from this. One is that when I put in the start-transaction marker, that needs to get committed to the log before anything happens; and then, before the final commit happens, it has to be the case that all of my various actions are in the log. It can't be the case that I do a commit, it gets on disk, but all of the other records are still in memory somewhere, because then the machine could crash and I'd see, well, start transaction, commit transaction, but I'd have no idea what I just committed, and that would be bad. So the log is clearly going to be something we need to be pushing out to disk, and it's going to have an ordering requirement that's very important to making this all work. The other thing I'll point out is: notice that if I write these operations into the log, A, B, C, D, and so on, and then I say commit, this doesn't necessarily mean that I've actually put them into the file system or actually performed the actions yet. What it means is that if I were to crash before doing them, I'd be able to wake up after the crash, go through the log, and figure out what the state of the system is supposed to be. So the state of the system is not only what's on disk in the file system, but also what's in the log, ordered in a way that lets me go back and reconstruct after a crash. I'll show you a couple of animations here just to give you a better idea of how that works. All right, so the commit is like sealing an envelope and saying the transaction now happened. Now, the question is: shouldn't things be logged after they happen? Well, in this type of log, we're doing something that's
called write-ahead logging: we're actually writing into the log before anything is put into the file system, so it's the opposite of the way you're thinking of it, I believe. The reason is that we want to write to the log first, rather than modifying the file system, so that if the commit never comes, because we crash, the file system is okay. If we were instead to start modifying the file system and then put the commit into the log, we'd be in a bad situation where we might have already corrupted the file system with a partial update. So I'm glad you asked; this is the opposite of what you were thinking, and thanks for that clarifying question. So here's a transactional file system example. We're going to get better reliability through the log: all changes are treated as transactions, and a transaction is committed once it's written to the log. Data is going to be forced to disk for reliability, and there's the possibility of using non-volatile RAM or flash to make this faster, because we can put things into non-volatile RAM more quickly, perhaps, than we can write to the disk, so the NVRAM can serve as the head of our log. And although the file system may not be updated right away, the data is going to be in the log. Now, the question here is: does the log negate the performance benefits of a buffer cache? The answer is, it depends on what you're logging; not all versions of journaling file systems (we'll talk about that in a moment) write all the data to the log first and then back to the file system. So let's go forward a few more slides before I answer that last question in the chat. Hold on one second. Okay, so the difference, by the way, between a log-structured and a journaled file system is that in a log-structured file system, all the data is only in the log; it doesn't even go to a separate file system,
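Here's a minimal write-ahead-log sketch in Python (a made-up record format for illustration, nothing like ext3's actual journal layout): updates are appended to the log first, a commit marker seals the transaction, and the "state of the system" is whatever is on disk plus whatever committed transactions in the log say.

```python
def begin(log, txn_id):
    log.append(("start", txn_id))

def log_write(log, txn_id, block, value):
    # Write-ahead: the intended update goes into the log BEFORE
    # any block of the real file system is touched.
    log.append(("write", txn_id, block, value))

def commit(log, txn_id):
    # The mere act of appending this record makes the transaction durable.
    log.append(("commit", txn_id))

def current_state(disk, log):
    """Disk contents plus every update from a COMMITTED transaction."""
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    state = dict(disk)
    for rec in log:
        if rec[0] == "write" and rec[1] in committed:
            state[rec[2]] = rec[3]
    return state

disk, log = {"blk0": "old"}, []
begin(log, 1)
log_write(log, 1, "blk0", "new")
assert current_state(disk, log)["blk0"] == "old"   # not committed: as if nothing happened
commit(log, 1)
assert current_state(disk, log)["blk0"] == "new"   # committed, yet disk untouched
assert disk["blk0"] == "old"
```

Notice that after the commit record lands, the logical file system has changed even though no file-system block on "disk" has been rewritten yet; applying the log to disk can happen lazily.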
whereas in a journaled file system, the log is really just helping us get reliability. And when do we start logging? Well, as soon as we've started up the file system, we start the logging. All right. Now, maybe I'll give you a little preview here. The question in the chat, which I hadn't answered yet, is: if not all actions have been completed and you crash, how do you figure out which have and haven't been completed? Let's hold that question and see if this gets answered. So we're going to focus in the next several slides on something called a journaling file system, where we don't modify the data structures on the disk directly right away; we write updates as a transaction into the log, typically called a journal or an intention list, and then when we commit, we have the option to apply them to the file system. Once changes are in the log, they can be safely applied to the file system: modifying inode pointers, directory mappings, etc. And the question in the chat about whether all of our operations have to be idempotent to make this work: the answer is no, well, some of them need to be idempotent, but let's see if this answers your question. Garbage collection is going to be a possibility here: once we've successfully applied things out of the log into the file system, we can remove them from the log. Now, Linux essentially took the original fast file system, called ext2, and added a journal to it to get ext3, so ext3 is really just a fast file system, Linux style, with a journal. And there are a bunch of options Linux gives you, such as whether to write all the data to the log first and then to the file system, and that double writing, surprisingly enough, doesn't always hurt you from a performance standpoint, because the log, remember, is
is sequential and so it's very fast so a lot of other examples of journaling file systems NTFS Apple HFS plus Linux xfs JFS C xt4 there's bunch of options here okay so let's create a file but this no journaling yet so think of this is like Fast file system or ext3 so we can see where we're going with this so when you create a brand new file and write some data there's a bunch of independent things that have to happen so first thing is you got to find some free data blocks so here's an example let's call this yellow thing a single free data block have to find ourselves a free inode entry so on in the iode table find a an insertion point in the directory so maybe there's some blocks on the directory we're going to change excuse me all right and then we're going to link things together so we're going to write uh the map which basically says uh Mark which blocks are in use okay we're going to um write the iode entry to the blocks we're going to write the directory entry to point to the iode all right and when we're done uh now we've got a a new um pointer in a directory this a mapping between a name and an i number that points to an iode which we've allocated which points to a dis block and now so notice all these different individual pieces uh and and the free space update uh have all happened just to create a file and write to it and if we sort of partially do this and we crash then we're going to end up with dangling blocks uh like for instance if we didn't successfully write the directory entry then we could have an note entry pointing to a data block and it's not in any directory and it's effectively lost okay so let's see how we could add a log to this or a journal so if you notice here um we're going to put this log in some non-volatile storage flash or on disk for instance is the simplest thing it's going to have a head and a tail so the head is the point at which we write the tail is the point at which we read okay and let's go through and see what happens 
when we write our new file. We're going to first find a free data block, and notice that I found the block, but I'm not going to do anything to it yet; I'm going to find my free inode entry and find my directory insertion point, but I'm not actually going to modify anything. Instead, I'm going to write a start-transaction record in the log; I'm going to write the free-space map update; I'm going to write the inode entry, pointing at where it's supposed to go, without actually writing the disk; and then I'm going to write a directory entry, again without actually writing it on disk. And notice that all of these things are reversible, because if I crash at any point up until now, I haven't actually modified anything in the file system, so the file system is going to look exactly like it did before I started this process. Now, when I hit commit, poof, all of a sudden it's committed. Think this through for a second: notice there have been no changes to the disk, and yet the mere act of writing commit to the log now makes that file committed, and the reason is that the state of the file system is considered to be what's on disk plus what's in the log. So if I crash at any point after the commit gets written, what I'm going to do is scan the log, and at that point I can apply the updates to the file system and things will look okay, and I can keep crashing. These updates are idempotent (this was a question earlier): the log has basically been choosing blocks for us, and we can keep overwriting the same block over and over again with the same data and it won't matter, so I can keep trying until I eventually get past the commit, at which point the file system will actually be updated to reflect this change. So the mere act of writing the commit in the log means that file has been written, with its new data. So after commit, we can replay the
transaction. Like I said, supposing we don't crash, we can replay the transaction by just writing the stuff onto the disk, eventually copying everything there, and once it's copied, I can start moving the tail (see how I'm applying entries), and once I get past the commit, I can throw out everything that's in the log. Now, here's a good question in the chat: what about reads? Do they have to scan the log for changes that haven't been flushed yet? No; this is where the block cache comes into play. The block cache has the most up-to-date state of the blocks, reflecting the total state of the file system, including what's in the log and what's on disk, and since the block cache filters the reads and writes from the user, it makes everything fast regardless of whether the data is actually only in the log or already on disk. So the block cache is an important aspect of making things fast here. Now, the question here: you can't flush until you commit? That's correct. This, again, is write-ahead logging: you have to get the log records into the log before you hit commit; once they're in the log, then you can flush things out to the file system on the disk. And if the cache is full and it's a really large write, then you have to make sure you've committed first. So there are a lot of potentially complicated questions about scheduling here, and about when you're allowed to schedule things; I don't want to go into it too much right now, but what I will say is that the file system can know when it might be in trouble from allowing too many writes before the log has been cleared, and all it has to do is put the clients to sleep until things have been properly flushed, and then wake the clients up. And so,
you just have to keep track of what the current state is, so that you always preserve this write-ahead-logging property. Now, once we've committed and applied everything, we can just throw out that part of the log, and the tail has moved here. What's the size of the log? That's changeable; it depends on how much data you want to hold. Now notice, by the way, that this particular log didn't necessarily log the data itself, so we could write our data to the disk and have only the metadata logged; that's one of the modes the Linux file system has, and another mode is one in which the data first goes into the log and then goes back out onto the disk. Now, if the system crashes after commit but before we've fully applied everything, that's okay, because we can just keep restarting: we don't remove things from the log until we've actually gotten past the commit, with the machine not crashing and everything pushed out to disk. That's the point at which we do a single atomic move of the tail, which retires that log entry. So we don't really need to know exactly which changes have been applied; we just take wherever the tail was. The tail might be here, and we're trying to apply, and we keep crashing over and over again; well, we just restart, and it's only after we get past the commit that we can throw that log entry out. In this particular version, the changes are idempotent; there are other ways you can do things, but we'll leave it this way for now. Okay, now let's look at the situation where we started that process and crashed. Notice what it looks like after the crash: maybe we found our blocks and started to write our updates, but we didn't get a commit record in. Then all we do is detect at this point that we crashed, and all we have to do is throw out everything that hasn't been committed yet, and
we're good to go. All of this stuff can be thrown out; I didn't quite have a good example here, but you can basically throw out the things you haven't applied, and transactions without commit records are ignored from that point on. The other case: if we recover and we have complete transactions, we scan the log, we find complete start/commit pairs, and at that point we can just redo as usual; in the process we update our block cache, and once we've gotten past that part of the boot, everything works as normal. So I've just given you the start, but I hope it gave you the idea of what's going on. Why do we go to all this trouble? The answer is that updates become atomic, even if we crash: it's either applied entirely or not at all, and so all of these physical operations, and there are potentially many of them, form a single logical unit. We get an atomic update. Now, you might ask: isn't this expensive? Well, it is expensive if we're in the mode where we're writing all the data twice, except that the log is typically sequential on the disk, so the cost of writing to the log is actually much lower than trying to write all of the different pieces scattered throughout the file system. It's faster than you might think, and there are some circumstances where this "write to the log with your data, then put it into the file system" approach can actually give you a boost in performance, especially when you've got a bunch of random writes; you can get them out to the disk quickly. So modern file systems give you the option to do metadata-only updates in the log, where you record just the file system data structures, like directory entries and inode usage. What happens in the worst case, where you crash but haven't flushed your data, is that you get a file with garbage in it, but you don't lose a bunch of files. And so that's a
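The recovery rule just described, ignore transactions without a commit record and redo the complete ones, can be sketched like this (using a toy log format made up for illustration). Note that redo is idempotent, so crashing during recovery and rescanning the log is harmless:

```python
def recover(disk, log):
    """Replay a journal after a crash.

    Records are ("start", txn), ("write", txn, block, value), ("commit", txn).
    Transactions with no commit record are simply ignored; committed ones
    are redone. Redo is idempotent: running recover() again (as if we
    crashed mid-recovery) produces the same disk state.
    """
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    for rec in log:
        if rec[0] == "write" and rec[1] in committed:
            disk[rec[2]] = rec[3]     # rewriting the same value is harmless
    return disk

log = [
    ("start", 1), ("write", 1, "blk0", "X"), ("commit", 1),
    ("start", 2), ("write", 2, "blk1", "Y"),   # crashed: no commit record
]
disk = recover({"blk0": "old", "blk1": "old"}, log)
assert disk == {"blk0": "X", "blk1": "old"}    # txn 1 redone, txn 2 ignored
assert recover(disk, log) == disk              # replaying again changes nothing
```

This is why the exact crash point doesn't matter: the scan always converges to "disk plus all committed transactions," no matter how many times recovery itself is interrupted.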
trade-off between reliability and performance. It gives you an option to do slightly less than full atomicity when it comes to the data itself. Now, a full buffer cache could be an example, yes, where your write call comes back and not everything's been written; that's correct. All right, now let's talk briefly about something that I wanted to remind everybody of. Are there any more questions? Okay. By the way, ext3 is the version of the Linux ext2 file system, the fast file system, that's got a journal in it, and all they did was take that file system and add a special file that serves as the journal. So I wanted to remind everybody, because we've had some people who I think have forgotten a little bit about the collaboration policy for CS162, so you've got to be careful. We do not want people importing parts of code from other people. Things that are okay here are, for instance, explaining a concept to somebody in another group, but don't explain exactly how to do something. If it's a concept that, for instance, I talk about in class, that's a perfectly okay thing to talk about. Discussing algorithms or strategies at a high level is probably okay. Discussing debugging approaches, like how you produce printfs that you can go through easily to find out what's going on, or what your overall structure for testing is, those kinds of things are okay. Searching online for generic algorithms like hash tables, that's okay. Things that are not okay are things that are likely to get caught by the code that we run to catch collaboration cases: sharing code or test cases explicitly with another group, copying or reading another group's code (you shouldn't even be looking at other people's code, or their test cases), copying or reading online code or test cases from prior years.
That's not okay. So if you're straying into specifics about a particular project or homework, you're probably in the red zone. Helping somebody in another group to debug their code, that's also not okay. We did have a good example in a past term where somebody sat down with a group that was having trouble and was helping them debug, but this person sat with them for so long that, as they kept incrementally changing their code, the code ended up with a structure that looked so much like the helper's group's code that the two groups were flagged for over-collaboration, and that's a problem. So be very careful not to do that, because we want you all to be doing your own personal work on homeworks, and exams of course, and your own group's work in group work. We compare all the project submissions against prior-year submissions, online solutions, etc., and we will take action against offenders who have violated this code. You can take a look on the homepage, where we discuss this in more detail. And don't put a friend in a bad position by asking for help that they shouldn't give you. In past terms we've had people who pleaded with friends until the friend just gave them some code to get them to leave them alone, and that ended up going badly for both of the people involved. So just try to do your own work. I remind you of this because we have caught what appear to be a number of collaboration cases, and we've only gone through some of the submissions, so try not to put yourself in a bad position. All right, now that we have stunned everybody into silence, let's talk about some real topics again; I'm going to assume that everybody will be very careful. So let me take this idea of logging that we just had in journaling and take it to its extreme. One extreme is
called the log-structured file system, which is an actual research file system on the Sprite operating system. I have the paper up on the resources page; you can take a look at it. In this case it's like what I just told you with the journal, but there is no file system underneath, so the log is the storage. The log is one continuous sequence of blocks that wraps around the whole disk; inodes get put into the log when they're changed, data is put into the log, etc., and everything's just in the log. So here's an example where we create two new files, dir1/file1 and dir2/file2, and write new data for the files, and here's the log. Notice this is the Sprite log-structured file system, and notice what happens: there were some parts of the file system in the log prior to this picture, but now we're writing file1, and what we do is write some data, which goes into the log, and then we change the inode for the directory, which also goes into the log. Then we write some data for the second file and for its directory, and all that goes into the log, and ultimately, since the inodes for dir1 and dir2 have changed, we put the updated inodes, up to say the root of the file system, into the log as well. When all is said and done, all of our data is in the log, in the order in which it was written. We never take it out of the log; it just stays there. Then, if we overwrite, say, part of file1, what happens is we append the new overwritten data and then a new inode which links to it, and so on. At some point the data in old parts of the log becomes obsolete, there'll be a bunch of holes, and at that point we do some garbage collection, but up until that point, the log is the file system. Is it kind of like git? Yep, there's a little bit of that aspect. So here's an example of the Unix file
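Here's a toy model of that append-only idea, purely illustrative and nothing like Sprite's actual on-disk layout: every write just appends a data block and then a new inode to the log, and a read finds the most recent inode for the file and follows its pointer. Overwriting leaves the old copy behind as a hole:

```python
# Toy log-structured "file system": the log is the only storage.
# Each log entry is ("data", contents) or ("inode", name, data_index).

log = []

def write_file(name, data):
    log.append(("data", data))
    log.append(("inode", name, len(log) - 1))  # inode points into the log

def read_file(name):
    for entry in reversed(log):                # newest inode wins
        if entry[0] == "inode" and entry[1] == name:
            return log[entry[2]][1]

write_file("file1", "aaa")
write_file("file1", "bbb")   # overwrite: the "aaa" block becomes a hole
print(read_file("file1"))    # -> bbb
```

Note that the stale "aaa" block is still sitting in the log; in the real system the segment cleaner would eventually copy the live data forward and reclaim that space.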
system, the fast file system, where when we write data, we're actually writing the data into the block groups where it was intended to be, close to the inodes for that directory. So here's an inode for the directory, we write some directory data; here's an inode for the file, we write some file data. It's in a specific spot on the disk, laid out in a way designed to make it fast. And if you notice, the data here is laid out to be fast for reading, but the writes go all over the place, whereas the data in the log-structured file system is laid out to be very fast for writes, but reads will suffer. If you read that paper, which is a classic, what you'll see as a justification for this is that write bandwidth is often at a premium, so you make the writes go really fast, and you rely on the block cache being big enough to give you really fast read performance. The other interesting aspect, since we've been talking about transactions, is that the fact that things are structured as a log means we can really easily undo things if we've got a failure. Part of what's in here are commit records, and so if we crash in the middle of writing, we just go back to an earlier part of the log and our file system is good to go without any changes. So the log-structured file system has this idea of journaling built into it, because the log is the file system. So the log is what's recorded on disk: file system operations. To figure out what's going on, you logically replay the log, and you put things in the block cache to make it fast. Large, important portions of the log are cached in memory, which is how we get things to be fast, and you do everything in bulk. The log is a collection of large segments on the disk that are completely sequential relative to each other, to make things fast, and if you read the paper, you'll see
that rather than what I first told you, where there's a single log that runs through the whole disk, there's actually a whole series of these big segments, and they garbage collect by segments. The way you get free space back is you garbage collect all of the holes left in the log after you've overwritten data, so there's a garbage collection process too, which we won't go into for now. Now, the reason I brought this up is one thing I promised you a couple of weeks ago but never delivered: what about flash file systems? How are they different from the fast file system? I wanted to remind you what flash is like. This is a CMOS transistor, which you've probably seen in some of your earlier classes, and the idea is that when this floating gate is high, we essentially turn a switch on so that data can flow through, and when the floating gate is low, the switch is turned off. Without this extra gate, with just the control gate, we'd have an ordinary transistor. The way that flash works is we actually trap electrons in this floating gate, which has oxide on either side of it, and trapping the electrons in there gives us enough of a detectable difference that we can store a one or a zero, by distinguishing the charged state from the non-charge-trapped state. The funny thing about this is that once we've written it, we cannot overwrite it until we erase it. If you remember, I talked about this a couple of weeks ago: you can never overwrite pages. What you do instead is erase big blocks of bits, keep them on a free list, and then take 4 KB pages off the free list to build your file system with, and eventually you garbage collect a big block and erase it again. So this is a little
different from, say, a disk. Another important thing here is that the way I write, as I alluded to, is by trapping electrons on this floating gate. The way that happens is I raise the word line so high that electrons go zooming across the insulator and land on the floating gate, and if I go even higher, I can encourage them to leave and clear the gate off. Well, that's a pretty harsh process, and eventually electrons get stuck in the insulator itself, and then this doesn't work as well, and so the flash actually wears out. Anybody making a file system out of this has to be careful not to erase and rewrite too many times. And yes, we trap electrons to store Reddit posts and cat videos. As I mentioned, because we're trapping charge in here, this is a higher energy state; it's technically heavier, and you can go look at where I talked a few lectures ago about the fact that a Kindle is technically heavier once you've put books on it. Now, one of the things that makes this easier is what's called the flash translation layer. Unlike a disk, where we number all the sectors and the system says "I want sector 5,496," in a flash SSD there's actually a translation layer: when you ask for a particular sector number, that request goes through the translation layer, which tells you which block on the flash currently holds that sector, and as you keep overwriting it from the operating system level, the underlying flash translation layer keeps changing which physical block that is. That underlying flash translation layer automatically takes care of wear leveling and making sure we're not wearing out our bits. But the question might be: is there something we could do in the file system to make that work better? There's firmware running on SSDs and so on, and so the question is, can we take
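Here's a tiny sketch of what a flash translation layer does; the structure and names are invented for illustration, not taken from any real SSD firmware. Logical sector numbers get remapped to fresh physical pages on every overwrite, so the old page is never touched in place and can be erased later in bulk:

```python
# Hypothetical flash translation layer: logical sector -> physical page.
# Overwrites never touch the old page; they take a fresh page from the
# free list and update the map, leaving the old page for garbage collection.

free_pages = list(range(8))    # pristine (already-erased) physical pages
mapping = {}                   # logical sector -> current physical page
stale = []                     # pages holding obsolete data, to be erased

def ftl_write(sector, flash, data):
    page = free_pages.pop(0)
    if sector in mapping:
        stale.append(mapping[sector])  # old copy is now garbage
    mapping[sector] = page
    flash[page] = data

flash = {}
ftl_write(5496, flash, "v1")
ftl_write(5496, flash, "v2")   # overwrite: remapped to a new physical page
print(mapping[5496], stale)    # -> 1 [0]
```

The file system above still thinks it overwrote sector 5496 in place; the remapping, and the wear leveling that comes with it, happens transparently underneath.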
advantage of this information to do something with it? The answer is yes. F2FS, a flash file system which is actually used on mobile devices like the Pixel 3 from Google (it was originally from Samsung), is a file system that's been adapted to the properties of flash. It assumes that the SSD interface, which looks like a disk for all practical purposes, has a flash translation layer underneath; that random reads are very fast, as fast as sequential reads; and that random writes are essentially bad for flash storage. The reason is that if I write randomly, I make it harder for the underlying flash translation layer to erase big blocks, because to erase a big block containing a bunch of randomly written pages, you first have to copy the live data out onto pristine pages, and only then can you erase, and that actually wears the flash out a bit more. So we're going to minimize writes, or updates, and try to keep writes sequential. What they do is start with a log-structured, copy-on-write file system, keeping writes as sequential as possible, and there's a node translation table to help keep things sequential. For more details you can check out the paper in the readings section called "F2FS: A New File System for Flash Storage." Just to show you a little bit: the log in this flash file system is actually split into a whole bunch of segments, separated into ones that get written a lot versus ones that aren't written as frequently, and so they lay out a number of different logs to manage how hot each file system area is. There's a translation table inside the operating system, in addition to the one on the SSD, and they try to classify blocks as being written
frequently or not. There's a checkpoint operation and so on; I'm not going to go into great detail, but I did want to mention some of these things, so if you're curious, you can take a look. For instance, here is an index structure of inodes, and if you look at the log-structured file system, what you see is that if I update some file data, I write that into the log, then I've got to write the direct pointer block into the log again, then the indirect pointer, then the inode, then the inode maps, and so on. I've got to write a whole bunch of blocks just because I changed some data, and that's because the log-structured file system never updates in place; it works its way through by writing all of the changed things into the log. This means a lot more writes, and so one of the things they do in F2FS is use a second translation table, so that the inode at a higher level has a name for a block, and that name is resolved through the translation table instead of a direct pointer. So they make some interesting modifications to the log-structured file system. I won't go into more detail, but I wanted to give you some idea of what you might do to make things faster, and to take advantage of the fact that you can do random reads cheaply, while random writes are expensive and wear the file system out. All right, now it's time to switch gears, unless there are any additional questions on log-structured file systems or transactions or what have you; maybe I'll pause for a second while everybody's digesting. So, in both the log-structured file system and F2FS, files are just in the log, right? There's no other file system underneath. So log-structured file systems are good for writes. Can anybody say why the log-structured file system might be good
for writes? It's a good question. Right: the log is sequential, so on a disk it goes down the track rather than writing randomly all over, and so it doesn't matter what your writes are, they all go out one after another on a sequential set of tracks, and they're very fast because you're avoiding seek time. In F2FS the advantage is a little different, but you're sequentially writing a whole bunch of blocks so that, when you go back to overwrite them again, the log can be erased as a group of blocks, and so it matches the underlying architecture of the device. The log-structured file system does potentially lead to fragmentation, in the sense that you get a lot of holes in old parts of the log, and that's where garbage collection comes into play. If you take a look at the papers, you'll see that what really happens is that as time goes on, the old parts of the log accumulate more and more holes, because you've overwritten data that lived in those places, and at some point you take the data that remains, copy it to a new part of the log, and then reclaim everything in that old part of the log. It's a type of garbage collection. All right, good. Now, switching gears: if you remember, I think on the first day I said that what's cool about operating systems is that they're part of this huge, world-scale system, everything from little tiny devices tied into local networks, to cars, phones, refrigerators, and computers, up through big machine rooms and the cloud, all part of one huge system. When I think about what I'm interacting with on a day-to-day basis, I like to think about how the things I do down at the small scale are actually utilizing resources spread throughout the globe. It's amazing when you think about it; sometimes when I think about the whole thing, it's
astounding to me that it all works somehow. Sometimes it doesn't entirely work, but it mostly works. The interesting question that comes to mind is: how do you get all of these things, spread geographically, with domains of fast local connection but really slow long-distance connection, to all work together? For the last few lectures (we're down to the last five or six here) I'm going to talk a bit about distributed systems and how they can work together to do, for instance, distributed decision making, which is a topic we're starting today. To start that topic, let's bring back what turns out to be very old terminology, just to make sure we're all on the same page. A centralized system is one in which there's a central component, a server of some sort, performing all the major functions, and a bunch of clients all talking to that server; that's typically called a client-server model. Many of the things you do with your cell phone, where the phone is one of the clients and something in the cloud is the server, are a modern analog of this traditional client-server situation. The question that immediately comes to mind with a centralized server is: how do you scale it? What happens if you've got not three clients but a hundred thousand, or a million? Clearly one server can't do it. We know that in the cloud there are many servers, but the question is how to structure them to do something intelligent when you've got many components. Now, a completely different model is what I like to call the peer-to-peer model, in which every component is a peer of the other components. Notice that in the client-server model the server was kind of king and the clients
were subjects, or something like that, whereas in the peer-to-peer model we have a whole bunch of peers all interacting with each other. In the client-server case it's pretty obvious who's responsible for what; in the peer-to-peer model that becomes unclear. But the peer-to-peer model is a good starting point if we want to make this server idea spread out and handle a really high load; for instance, maybe we could draw a box around a bunch of these peers working in peer-to-peer mode and treat the whole box as a server. So what's the motivation for distributing in that way rather than having a single server? You could come up with lots of reasons. Maybe it's cheaper and easier to build lots of little simple computers rather than one huge server in the middle. Maybe it's easier to add power incrementally: what I mean is, if I've got a good peer-to-peer model and I need more power, I just add some more computers, and if things work, then by adding a few more machines I've got a more powerful system than I started with, and I can do that incrementally. Maybe users have complete control over some of their components: in that big peer-to-peer system there may be some machines that I own, and yes, I'll help everybody else a bit, but I have full control over my hardware and can take it back when I want. And of course collaboration is an obvious goal here, because maybe a peer-to-peer model makes it easier to collaborate. So the promise of these distributed systems is that they're much more available, because there are more components likely to be up; that they have better durability, because by copying my data to lots of different machines it's more likely to survive a crash; and maybe more security, because each piece is smaller and maybe
easier to make secure. Now, you should be questioning some of these statements, because the reality is typically different. This is Leslie Lamport; he's done all sorts of really cool systems work, and we'll talk about a couple of his contributions in the next lecture and a half, but what he liked to point out is that the reality behind a lot of distributed systems is actually disappointing. The availability is worse rather than better, because it depends on every machine being up. He has a very famous quote: "A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable." It can have worse reliability, because you lose data if any machine crashes. It can have worse security, of course, because anyone in the world can break into one component, and if they're all tied together, they've broken into everything. So distributed systems hold high promise, but you've got to be really careful how you build them. Coordination becomes very difficult: you've got to coordinate multiple copies of shared state information, and what would be easy in a centralized system, where everybody goes through one central computer, becomes a lot harder when things are distributed. And of course trust, security, privacy, denial of service: these are all words you've heard a lot, but many new variants of these problems arise as soon as we start distributing. Can you trust the other machines of a distributed application enough to perform a protocol correctly? I think there's a corollary of Lamport's quote that I like: a distributed system is one where you can't do work because some computer you didn't even know existed is successfully coordinating an attack on your system. That's the standard DDoS. So what are some goals for this kind of system? You'd like transparency, which is the ability of the system
to mask its complexity. Remember, earlier I said the way we go from a single server to something that can handle a hundred thousand or a million clients is to put a bunch of machines together, draw a box around them, and make the box transparently behave the same way a single computer would, so we don't have to know about the complexity. What are some transparencies we might want? One is location transparency, where you don't have to know where resources are located; pretty much anybody who's dealt with the cloud has seen what that's like. Perhaps migration, so that resources can move around, maybe for better performance or durability, without us having to know they've moved. Maybe replication: perhaps I pay to make sure my data doesn't go away, and under the covers the system transparently increases the number of copies, or does erasure coding, in a way I don't need to know about but that makes my data much more durable. Maybe I don't have to know how many other users are out there: one of the things that has worked well about the cloud is that everybody interacts point-to-point between their phone and something out there, without knowing how many other people are doing the same, and that level of concurrency works well if you're just working one-to-one on something; if you're actually collaborating on something shared, it gets trickier, and so concurrency is problematic under some circumstances. Parallelism: the system may speed up large jobs by splitting them into small pieces transparently, without telling you. Fault tolerance: that's like what I said about replication; the system hides the fact that things are going wrong, and does so in a way that still makes forward progress. So transparency and collaboration require some way
for different processors to communicate with one another, and of course that leads to the need for networks, which we'll talk about in more detail in a lecture or two. For now I want to talk about this idea of decision making spread across a bunch of nodes, because that's the beginning of how we do all of this. On the question of whether it's a goal for us to not be able to tell where resources are located, I'd say yes and no. It's better to think of it as: I don't want to have to know where the resources are unless I care. I'd like the system to transparently adapt, as long as it stays within the boundaries of my policies and my goals for privacy and so on, without me having to deal with it. And if I do care, then another goal is to be able to selectively break the transparency to meet whatever goal made me care, while the rest of the transparencies remain. So it's really the desire to not have to know. A really important transparency, by the way, concerns what happens when a machine storing some of your data crashes. You don't want to have to log into your application and change an IP address to point at a different server just because some server crashed; you'd like that process to be transparent. So think of these goals as things I'd like to be transparent unless I care. Perhaps you'd call it opacity, but I think it really is transparency: it's masking complexity so I don't have to know. So how do entities communicate? With some sort of protocol. Clearly there's going to be communication through a network of some kind, via messages, and a protocol is really an agreement on how to communicate, including syntax, how a communication is structured and specified, and semantics, what a communication means, so
the actions taken when transmitting, when receiving, when a timer expires, etc. I'm noticing on the chat here: "masking equals transparency?" It is perhaps a funny use of terminology, but you'd like things to be invisible to you, happening under the covers; that's where the word transparent comes from. You see the functionality without having to know what's happening underneath, and that's often called a transparency. I realize it seems a little strange, but that is how the terminology is used. So, for instance, protocols are often described by a state machine on either side. Here's an example where I've got two state machines, and part of what the protocol does is track the states on both sides, so that both sides have the same notion of the state of the world, and the protocol is responsible for maintaining that. Suppose these are on opposite sides of the world and the state machines are being transparently replicated (that's again the use of the word transparent); then I can act on the current state of the system here, say at Berkeley, or in Beijing, and have confidence that I'm working with the same information as the other side. Usually there's some stable storage that's part of this state replication. You could even think of a simple example where these are two replicas of the same file system, there's a transparent protocol, the states represent the state of the file system, and the protocol keeps them in sync; that's another example of a good protocol. Among other things, we want stability in the face of failure: even when parts of the system are failing, or the storage falls apart in one place but still exists in others, we'd like the state machine replication to continue working properly. It may be that endpoints are selectively failing, but if I
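Here's a toy sketch of that replicated state machine idea; the class and variable names are invented, and a real protocol would of course handle ordering, loss, and failure. The key point it shows is that if both sides apply the same operations in the same order, they end up agreeing on the state of the world:

```python
# Toy replicated state machine: two replicas kept in sync by replaying
# the same operations in the same order over a (perfect, in-order) channel.

class Replica:
    def __init__(self):
        self.state = 0
    def apply(self, op, arg):
        if op == "add":
            self.state += arg

network = []                    # stand-in for the wire: an in-order queue
berkeley, beijing = Replica(), Replica()

def submit(op, arg):
    berkeley.apply(op, arg)     # apply locally...
    network.append((op, arg))   # ...and send the operation to the peer

submit("add", 5)
submit("add", 7)
for msg in network:             # the far side replays the same operations
    beijing.apply(*msg)

print(berkeley.state, beijing.state)   # -> 12 12, both sides agree
```

Everything hard about real protocols lives in the assumptions this sketch makes: that messages arrive, arrive in order, and that neither side crashes mid-stream.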
were to vote among the states of all the different participants, say I've got three participants and one of them fails, a voting process could be employed to figure out what the real state of the system actually is, and we'll talk about some of this in a moment. So, examples of protocols in human interaction; I thought I'd put this down just for the heck of it. You've got a phone, you pick it up, you call somebody, you listen for the dial tone; okay, maybe you don't do that on a cell phone, but you check that you have service. You dial the number, you hear ringing, and the callee says "hello," and you say "hi, it's John," or "hi, it's me." That's my favorite kind of goofy introduction; it's like, well, who's "me"? But then you say, "hey, do you think blah blah blah," and they say, "yeah, blah blah blah," and you say goodbye, and they say goodbye, and you hang up. Now, this is probably a conversation you've had late at night sometimes, including the blah blah blah; I know I've had a few of them myself. But really you're looking at a protocol, because there's a protocol that goes from ringing, to answering on the other side, to responding, so the answer comes back and now you know the connection's been set up; then the caller says something, the callee responds, and there's some process for hanging up. This protocol of synchronizing state between the person who made the call and the other person is a human, interactive version of what we'd like to do in our network protocols. The problem is that there are many pieces of hardware; this has been our standard issue throughout the whole term, the fact that hardware is vastly different at the I/O level, so how do we deal with that? If you look here, when we're talking about communicating, we have a bunch of applications at one level, and we have a bunch
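The voting idea mentioned above can be sketched very simply; this is a bare majority vote for illustration, not a real consensus protocol like Paxos, and the function name is invented. With three participants, the majority value wins even if one replica has failed or diverged:

```python
# Sketch of voting on replicated state: the majority value wins,
# so one bad or failed replica out of three is outvoted.

from collections import Counter

def vote(states):
    value, count = Counter(states).most_common(1)[0]
    return value if count > len(states) // 2 else None  # need a strict majority

print(vote(["X", "X", "Y"]))   # -> X    (one diverged replica is outvoted)
print(vote(["X", "Y", "Z"]))   # -> None (no majority: can't decide)
```

Real consensus protocols have to handle much more than this (participants that answer differently to different peers, messages that never arrive), which is why we'll spend real lecture time on distributed decision making.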
of ways that things communicate, maybe coax cable or fiber optics or whatever, and the question is: many different applications have to communicate over many different media, in many different styles, so what do you do? Well, you don't want a point-to-point arrangement where Skype talks one way over coaxial cable, another way over fiber optics, another over wireless, and so on, because you'll very rapidly get an n-squared blow-up in complexity. For instance, if we add a new application like HTTP, it shouldn't be the case that we have to write a new communication module for every medium we might communicate over; and similarly, if we come up with a new way to communicate, like packet radio, we don't want an n-squared set of adaptations between every application and every new communication medium. It looks silly when you spell it out, but clearly a level of abstraction, kind of like our device drivers, needs to be employed here, and if you've taken a networking class, you certainly know what that's about. So how does the internet avoid this? We put layering in: an intermediate layer, a set of abstractions providing network functionality across technologies. As a result, a new application we add on top, like HTTP, only has to figure out how to talk to this intermediate layer, which is often called the narrow waist; the whole picture looks like an hourglass. And if I add some new communication technology at the bottom, I basically just have to match the intermediate layer to that technology, and I've made my problem much simpler through abstraction. Of course, this is the typical hourglass everybody sees in a networking class, where IP is the protocol of
choice at the narrow waist. It wasn't always that way, but it's become that way, and now all of the layers above just have to send IP packets, and all of the layers below have to carry IP between different sites — and if we do that, then we basically have the internet. It's astonishing how well this has worked to connect a whole bunch of devices and computers and storage and everything, simply by standardizing IP in the middle. So what are the implications of this hourglass? There's a single internet-layer module, the IP protocol, which allows arbitrary networks to interoperate: any technology that supports IP can exchange packets. It allows applications to function on all networks: any application that can run on IP can use any network. And it supports simultaneous innovation above and below: you can do all sorts of stuff at the application layer, and you can do all sorts of stuff below at the physical layers — you can have many different physical layers — but changing IP itself has turned out to be very challenging. There's a funny story about IPv6, which has been the "next" IP protocol for the last 20 years; only in the last five years or so has it really taken hold and started to become a widely deployed protocol. It's been very hard to swap out IPv4, the traditional one, for IPv6, because IPv4 had become so embedded in the world. Now, some drawbacks of layering are all of the drawbacks you could imagine, especially now that you've been through 162: layer N may end up duplicating stuff that layer N-minus-one is doing, or layers need a bunch of the same information, so you end up communicating information up and down the layers, and you get a bunch of memory copies, and it's expensive. Layering may hurt performance: any API could potentially be made faster by flattening it out.
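Here's a tiny, made-up sketch of what the narrow waist buys you in code terms: every application calls one send routine, and each medium plugs in below it by providing a transmit function — so adding an app or a medium costs one adapter, not N of them (all the names here are illustrative):

```c
#include <stdio.h>

/* The "narrow waist" as a function-pointer interface: media below
   register a transmit routine; applications above call ip_send(). */
typedef int (*link_transmit_fn)(const char *payload);

static link_transmit_fn current_link;   /* the medium in use; NULL if none */

void ip_register_link(link_transmit_fn fn) { current_link = fn; }

/* The single call every application uses — N apps + M media means
   N + M adapters instead of N * M point-to-point modules. */
int ip_send(const char *payload) {
    return current_link ? current_link(payload) : -1;
}

/* One hypothetical medium below the waist. */
int fiber_transmit(const char *payload) {
    printf("fiber: %s\n", payload);
    return 0;
}
```

The point isn't the code itself but the shape: the two sides of the hourglass only ever talk through the one narrow interface in the middle.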
But then again, if you do this the wrong way, you end up with that N-squared pattern again, and that's not a good idea. So there's this trade-off between performance and APIs and layering, and it turns out that with IP it's been an extremely powerful trade-off. Now, what I'd like to talk about is the end-to-end argument. There was a hugely influential paper — which, again, is on the resources page — by Saltzer, Reed, and Clark from 1984. I realize that's ancient history now, but it's one of these papers that still has some very important philosophy in it that I want to make sure everybody gets. Some would call it the sacred text of the internet. There have been endless disputes about what it actually means; everybody cites it as supporting their position — and you could imagine that's true of pretty much any good document that lots of people read; they'll get into philosophical arguments about it. The message, however, is pretty simple: some types of network functionality can only be correctly implemented end to end — things like reliability, security, etc. are examples. Because of this, the end hosts can basically satisfy those requirements without the network's help — and they must do it anyway — and therefore you could imagine that the network didn't have to do it at all. So the way this paper ends, if you go read it, is basically: you don't have to go out of your way to implement stuff in the network, because you've got to do it at the endpoints anyway. And the simplest example they give, which I think is very telling, is this: you've got two hosts, and host A has a file it wants to send to host B. Of course you've got applications for the file transfer, you've got the operating system, you've got networks, etc. — all of these are parts of that — and you might ask
yourself: well, how do I transmit? The application reads the file off the disk and sends it to the operating system, which then sends it out of a socket; it then comes up through the operating system on the other side and into the application, which writes it to the disk. And the question is: how do you make that reliable? Well, one option is you make everything reliable: you make it 100% reliable that you load the file off the disk, and then 100% reliable that things get transferred from the application to the OS — that transfer might not be so bad — but then you've got to somehow make sure that when it goes across the network, every link in the middle is reliable. If we're transmitting from Berkeley to Beijing, there's a whole bunch of other things: there are transoceanic cables, there are a bunch of hops at different levels, and there's a lot of detail in this link that we're not talking about right now — and we'd have to make sure that every link was 100% reliable, so that when we compose everything together we get a 100% reliable transfer. Except it never works that way, right? It's very hard to make anything 100% reliable, and furthermore it's still possible that you missed something. One of the interesting things about that paper is they relate a story from 1984 in which they were transmitting copies of kernel source code from one host to another. It was only going across a few buildings, but there were a lot of hops in between, and they were carefully checksumming and checking every hop along the way to try to make sure this was never screwed up. Except what they didn't realize was that in some of the routers along the way — even though each of the links was carefully checksummed and made to be reliable — the routers had a bug that would, I think, transpose bits every million bytes transmitted through memory, because there was a bug in
the source code of the router. As a result, even though they checksummed everything along the way, the data got slowly corrupted — and the kernel had been transferred back and forth across these links a couple of times, many times, so the corruption accumulated. We used to call that bit rot. It was totally unexpected, and things got so corrupted they had to pull things back off of tape to fix it. So this idea of making things reliable by fixing everything in the middle is not only very hard — it might not even be the right thing. What's the other option? You take the file at point A, you transmit it as well as you can to point B, and then you check at the end: did I get the file I expected? I compute a hash or a checksum at one end, send it to the other, check it, and either I've got the file or I don't — and if I don't, I retransmit. What's good about this end-to-end approach is that it makes up for all sorts of problems in the middle by catching bad transmissions. Now, of course, what's pointed out in the paper is: if you've got a 1-kilobyte file versus a gigabyte file, the more data you're transmitting, the more likely the transfer is to fail somewhere in the middle — so the gigabyte file is far more likely to fail than the 1-kilobyte file. If you have a really large file and wait until the very end before you checksum it, you're going to see a lot of failures before you succeed, and it may take a very long time. That's why you want to break things into chunks and individually check and acknowledge them. But the point of this example is that if things have to be done at the endpoints anyway, then maybe you don't need to do them as carefully in the middle — and as a result, any reliability you might add in the middle is really for improving performance.
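The end-to-end check itself is simple to sketch: the receiver recomputes a checksum over what arrived and compares it with the sender's. Here a toy additive checksum stands in for a real hash — this is my sketch of the idea, not code from the paper:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy rolling checksum over a buffer; a real system would use a
   cryptographic hash or CRC, but the end-to-end structure is the same. */
uint32_t checksum(const uint8_t *data, size_t len) {
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = sum * 31 + data[i];
    return sum;
}

/* End-to-end verification: returns 1 if the received copy matches the
   checksum the sender computed over the original file. */
int transfer_ok(const uint8_t *received, size_t len, uint32_t sender_sum) {
    return checksum(received, len) == sender_sum;
}
```

When `transfer_ok` fails, the receiver asks for a retransmission — and checking per chunk rather than over the whole gigabyte file bounds how much has to be resent.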
So the second option is basically saying: here's the checksum of what I got; it goes back, and as a result you pull the file off the disk and the original application checks it and sees whether you're good to go. Solution one, as I said, was incomplete, because if memory gets corrupted the receiver has to do the check anyway; solution two is complete, because you had to do that check regardless. So is there any need to implement reliability at all at the lower layers? The end-to-end argument, by the way — if you know anything about the history of the internet — is kind of what was used to justify the structure of the basic internet as it is right now, which is a datagram service; we'll talk more about that in a lecture or two. Packets of small size are sent across and they either make it or they don't, and we don't worry about that, because we're checking everything end to end. So this paper, and the end-to-end philosophy in general, is kind of the reason the internet is the way it is now. It could be more efficient, though, to do something in the middle. As I mentioned, yes, we could just send the data to the other side, hope it gets there, and retransmit if it doesn't; but at some point that might be too expensive — too much retransmitting if I had a really bad link in the middle. So there's a performance reason for improving things in the middle, but there isn't a functionality need. And this discussion leads to a trade-off about how much work you want to do in the middle: implementing complex functionality in the network doesn't reduce the host implementation complexity, because you've still got to do it there; it does increase the network complexity, and it probably imposes delay and overhead on every application, even the ones that don't need it. So this argues that maybe you shouldn't do something in the middle if you have to do it at the ends anyway. But implementing
things in the network can enhance performance in some cases, like very lossy links. Now, what's interesting is that a conservative interpretation of the end-to-end argument — just like there are always conservative and liberal interpretations of pretty much anything — would say: don't bother implementing functionality at a lower level unless it can be completely implemented at that level and doesn't need to be at the endpoints, or unless it actually relieves burden from the host; otherwise, don't bother. A moderate interpretation — and I like to think of myself as a moderate here — is basically: think twice before implementing something in the network that the host can do correctly; implement it in the lower layers only if it's going to be a performance enhancement or has a good justification, and only if it doesn't impose burden on apps that don't need it. This is the interpretation that I always use and that I suggest in this class. And you might ask: is this still valid? There are some instances where even this moderate interpretation isn't quite enough. What about denial of service? If somebody is going to attack a communication stream from outside, there might actually be a pretty good argument for putting firewalls and checksums and so on on intermediate links to prevent the denial of service — in that instance, even though the end-to-end communication still has to happen, you're enhancing the overall path by putting functionality in the middle. Or privacy: if I want to protect privacy, putting firewalls in the middle makes sense. Or maybe there are things that simply have to be done in the network — certain routing protocols, which pick paths from point A to point B, have to be done in the network; they can't really be done well end to end. All right, so how do you actually program a distributed application? This is going to be our topic for next time. You need to
synchronize multiple threads running on different machines, but there's no shared memory and there's no test-and-set, so all of the stuff we talked about earlier in the term isn't quite available to you in this simple view of the world, which is just a bunch of messages: I send from one side and receive on the other. So there's one abstraction over the network. It's already atomic, in the sense that no receiver gets a portion of a message — typically we checksum things, and if a bad message comes through, we throw it out and retransmit. The interface is sort of like a mailbox: the sender directs a message at a receiver's mailbox, a temporary holding area at the destination. We have the idea of a send of a message to the mailbox, and a receive, which is often blocking, to wait for a message to show up. What we're going to do next lecture is ask: can we take this basic idea and build something interesting on top of it that will allow us to build distributed applications, to synchronize state machines among multiple machines, and ultimately to do pretty interesting distributed, peer-to-peer-style applications? So that'll be for next time. In conclusion: I brought back this idea of the "ilities." Availability — how often is the resource available; durability — how well is it preserved against faults; reliability — how often is the resource performing correctly. We talked about preserving the bits — I like to think of erasure codes and RAID as preserving the bits — while copy-on-write is about preserving the integrity, not the bits: with copy-on-write, I make a bunch of new changes without overwriting anything, instead using pointers to the old data, and that lets me preserve the integrity of the old data even while changing it. We talked about how logs can improve reliability, and about journaling file systems such as
ext3 — NTFS is similar — and, in general, about transactions over a log as a general solution; hopefully the examples I gave there worked out well. We started talking about protocols between parties that will help us build distributed applications, and we spent some time with the end-to-end argument, which will hopefully inform us as we go forward. Next time we'll start talking about distributed decision making, such as two-phase commit — didn't quite get there this time, but we'll definitely do that next time. So I'm going to say goodbye to everybody. I'm sorry for going over — I guess I've been doing that a lot this term, my apologies — but I hope you have a good evening, and we will see you on Wednesday.
CS_162_Operating_Systems_and_Systems_Programming_Berkeley | CS162_Lecture_10_Scheduling_1_Concepts_and_Classic_Policies.txt
Welcome back to CS 162, everybody. We have some brave souls here on the night before the exam, so that's great. Today we're going to briefly finish up something we didn't get to last time, and then dive into a new topic, which is scheduling. First, though — if you remember from last time, among other things we did a lot of talking about the monitor paradigm for synchronizing. A monitor is basically a lock plus zero or more condition variables for managing concurrent access to shared data. Monitors are a paradigm we talked about, and some languages like Java actually provide monitors natively; for languages that don't have them natively, you can get a condition-variable class that integrates with a lock class and still program with the monitor idea. A condition variable is the key idea here: a queue of threads waiting for something inside a critical section. So the key idea is that you allow sleeping inside the critical section, by hiding the fact that you're going to atomically release the lock and reacquire it — and this is in contrast to semaphores, of course, where you can't wait inside the critical section. We talked about the operations: there are three main ones, with different names depending on the implementation, but wait, signal, and broadcast are the basic ideas. And the rule is: you always have to hold the lock when you do any condition-variable operation. So this was the general pattern we talked about for Mesa-scheduled condition variables: you grab the lock, you check some condition, and you potentially wait if the condition isn't satisfied. Because it's Mesa-scheduled, when we wake up from the wait we always make sure to recheck the condition — so that's the while
loop — and really that's because it's possible that somebody got in and made the condition invalid again before we actually got to start running. Then you unlock, you do something, and there's always a closing pattern where you grab the lock again, maybe do some other things, but eventually you signal, to make sure anybody who is waiting gets woken up, and then you unlock. The key idea here is to think of yourself as always within the critical section: you grab the lock here, you release the lock there, and all the code you see as a programmer between the lock and unlock is considered to execute with the lock held. And of course, when you actually go to sleep, under the covers it releases the lock and then reacquires it before you start up again. We spent a fair amount of time last time using this pattern to solve the readers/writers problem, so I wanted to see whether there were any questions on monitors before we move on. Any remaining questions at all? Okay, good. So, the thing I wanted to finish up: last time, at the very end of the lecture, I was showing you some examples of scheduling of threads within the kernel. The one thing I did want to do was talk a little bit about the I/O interface, because we didn't cover that part earlier in the term. If you really look at the layers of I/O we've talked about: at the user-application level you might execute a read, which at the raw interface level translates directly to a system call. Inside the libc library, that system call basically marshals the arguments together into registers and executes a syscall, and then the results of the syscall are returned from this read call as if it were a function — it really is kind of like a function that calls the kernel; you're getting familiar with this now with project one, of course.
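As an aside, the Mesa-style monitor pattern recapped a moment ago — grab the lock, re-check the condition in a while loop, wait, and signal under the lock — can be sketched in pthreads like this (my sketch for a simple ready flag, not the lecture's slide code):

```c
#include <pthread.h>

/* Mesa-scheduled monitor pattern: a consumer waits until a producer
   sets a flag.  The while loop re-checks the condition after wakeup. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int ready = 0;

void consumer_wait(void) {
    pthread_mutex_lock(&lock);
    while (!ready)                      /* re-check: Mesa semantics   */
        pthread_cond_wait(&cond, &lock);/* atomically unlock + sleep  */
    ready = 0;                          /* consume the condition      */
    pthread_mutex_unlock(&lock);
}

void producer_signal(void) {
    pthread_mutex_lock(&lock);          /* always hold lock for CV ops */
    ready = 1;
    pthread_cond_signal(&cond);         /* wake one waiter             */
    pthread_mutex_unlock(&lock);
}
```

Note the while loop rather than an if: between being signaled and actually running, another thread may have slipped in and invalidated the condition, so the waiter must check again.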
And then inside, that system call is a type of exception — the same thing happens with interrupt processing — and it goes into the system-call handler, where you unmarshal, or take apart, the arguments, and then dispatch a handler based on what you're trying to do. So it might be dispatched based on the fact that this was a system call for a read, for instance; then you marshal the results back and return them from the system call. One layer deeper, when we dispatch on this read we might actually call something called vfs_read inside the kernel, which takes the file structure for that file, a buffer, and a few other things, and basically does the file access, maybe calling a device driver in the process. So I wanted to look a little deeper at these lower layers — having seen them once or twice is going to be useful as we get further into projects two and three. You should know there are many different types of I/O, but as we were saying earlier in the term, the Unix way — the POSIX way — basically treats all of these like file I/O. So that system-call interface — read, write, open, close — translates into calls across the system-call boundary into the kernel (that's this blue thing right here), and no matter what we're calling — whether it's a file system with actual storage on block devices; or device control, like a serial link (that's what the ttys are), keyboard, or mouse; or network sockets — the same open, close, read, and write system calls are used. And the question might be: how does that work? That's the magic of a standardized API: arranging things so that all of these very different devices can be accessed as if they were files. Internally, there's this file description structure which represents an open device being accessed like a file. When we return a file descriptor as an integer, as you recall, that file descriptor that the process knows
about is mapped, inside the kernel, to one of these structures. This is the internal data structure describing everything about the file — we haven't talked much about it; we mentioned it once before, and you've probably strayed into the equivalent of it in Pintos by now. It covers everything having to do with that device: where the information resides, its status, and how to access it. In the kernel, what typically gets passed around is a pointer to this file structure — it's a struct file star — and everything is accessed through it. And we can't give that file structure to the user. Why is that? Why can't we give that pointer up to the user? Anybody know why? Because it's in kernel memory — exactly. Those addresses don't mean anything to users. And the capital-letter FILE stars we've talked about are different from this: the capital FILE star represents buffered, user-level memory buffering a file, while this structure represents the actual internal implementation of the file. You also see something here which we're not going to talk about today called an inode. When we start talking about how file systems are implemented, the inode is going to come up a lot, and that inode can point at a wide variety of things, not just file blocks. The thing of interest for today is this file-operations structure — the f_op item in the file structure — which describes how this device implements its operations. For disks it points to things like the file operations in the file system; for pipes it points to pipe operations. I noticed there were some questions on Piazza about how a pipe gets implemented in the kernel — where is it? Well, it's a queue inside the kernel. And how do you access it? You access it because the file structure points at both the queue in
the kernel and a set of operations for how to read, write, etc., for the pipe. And for sockets it points to socket operations. So the cool thing is that by putting this layer of indirection in here, you get the ability to have everything look like a file from the user's level, and all of the complexity is buried behind this simple interface. For instance, here's an example of what that file-operations structure looks like: the set of standardized operations — seeking, reading, writing bytes, asynchronous I/O reads and writes, et cetera; how to open or read a directory, how to flush, and so on. These standardized operations are the ones that devices that want to look like files to the user have to provide. For instance, for a pipe there are two different file-operations structures hard-coded in the pipe implementation: one for the read end and one for the write end — and the read end doesn't have write calls, and the write end doesn't have read calls. Now, this vfs_read that I showed you — the virtual-file-system read; we'll talk a lot more about it in detail later in the term — is where the read system call goes. It takes the file star in, reads up to count bytes from the file — starting at the position kept in the file star — into the buffer that's given, and returns either an error or the number of bytes read. Here's an example of one of its checks: if we're trying to read, do we have read access or not? Check whether the file has read methods — if we try to read something that doesn't know how to read, then we're going to fault.
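To make the f_op indirection concrete, here's a toy, made-up version of the idea: a file struct carries a table of function pointers, and a "VFS" layer dispatches through the table, refusing operations the device doesn't provide. This is illustrative only, not the actual Linux definitions:

```c
#include <stddef.h>
#include <string.h>
#include <sys/types.h>

struct file;   /* forward declaration */

/* Miniature f_op table: each device type fills in what it supports. */
struct file_operations {
    ssize_t (*read)(struct file *f, char *buf, size_t count);
    ssize_t (*write)(struct file *f, const char *buf, size_t count);
};

struct file {
    const struct file_operations *f_op;
    const char *data;                 /* stand-in for real backing state */
};

/* The "VFS" layer dispatches through the table; a device with no read
   method (e.g. the write end of a pipe) gets an error. */
ssize_t vfs_read(struct file *f, char *buf, size_t count) {
    if (!f->f_op || !f->f_op->read)
        return -1;
    return f->f_op->read(f, buf, count);
}

/* One hypothetical device: an in-memory "file". */
static ssize_t mem_read(struct file *f, char *buf, size_t count) {
    size_t n = strlen(f->data);
    if (count < n) n = count;
    memcpy(buf, f->data, n);
    return (ssize_t)n;
}

const struct file_operations mem_fops = { mem_read, NULL };
```

The payoff is exactly the one described above: vfs_read never needs to know whether it's talking to a disk, a pipe, or a socket — it just calls through f_op.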
A good example would be trying to read from the write end of a pipe. The other thing that's very important — which you've started to do in project one, or probably should have been doing already — is to check that the user's buffer is actually accessible to the user. If the user doesn't really have access to the buffer they want to put things in, then this will fault. Then: can we actually read from the file? Check the range. If there are read operations, we do the read using the device-specific read operation; otherwise we do what's called a synchronous read, which uses the provided asynchronous operations to read from it. When we're done, we notify the parent that something was read — so when you have file browsers open on the screen and you create a new file, you'll suddenly see the file pop up in the browser; that's notification going on inside the file system. We also update the number of bytes read by the current task — this is scheduling information. We'll talk about schedulers in a bit for CPU cycles, but it's probable that we're also scheduling the number of bytes being pulled off a file system, and we may choose to suspend a process that's reading more than we'd like at a given time. Then we update the number of read calls — again, statistics — and we return. So the idea is that everything at the top level has been designed to be easily plugged and played with a wide variety of devices underneath. Any questions on that? All right. So, underneath the covers, even further below what we showed here: among these many different types of I/O, we have the high-level interface we've just been talking about, and at the bottom we have the devices, and somewhere in
the middle we have to have things that know how to interface with all the unique characteristics of the devices while providing a standardized-enough interface up to the kernel. Those, as you're probably well aware, are called device drivers. So, going back to where we were: a device driver is device-specific code in the kernel that interacts directly with the device and provides a standardized internal interface upward — which, not surprisingly, is going to be very close to those file operations I showed you earlier — so the kernel I/O code at that lower level can easily interact with different devices. The device-driver layer gives us the ability to interact identically with, say, a USB key plugged into a USB port versus a spinning disk drive: they have the same kind of interface in the kernel once you get above the device driver. The device driver also provides the special, device-specific configuration via the ioctl system calls: if you've got a device that does support open, close, read, and write, but has some special configuration that doesn't fit into that standardized interface, you'll typically use the ioctl calls for that device — you might set the resolution on a display, or the baud rate on a serial link. A device driver typically has two pieces: a top half and a bottom half. The top half, which interfaces up into the kernel, is accessed in a call path from system calls, and it implements the standard cross-device calls — these will sound very familiar from the f_ops we were just talking about; this is a slightly different, lower layer, but it also supports open, close, read, write, and ioctl. It also has a strategy routine, which is typically how you start I/O: for instance, if the file-system code decides that there are
some number of blocks that need to be pulled out of the file system, then once that's been determined, the strategy routine can be put together to start the actual I/O happening. So that's the top half — and the thing about the top half is that running threads can go all the way through it, and if the I/O doesn't actually have to happen, they can return straight back to the user; but if the I/O does have to go to a slow device, then the top half is potentially going to put things to sleep. The bottom half runs pretty much exclusively as an interrupt routine: it gets input, or transfers the next block of output, based on interrupts that have occurred, and it may wake sleeping threads in the top half if the I/O is now complete. So here's an example of an I/O request coming from the user, where the user wants to, let's say, do a read. The user program at the top is going to request some I/O — that might be a read system call, or an fread through the buffered I/O — and that's going to cross into the kernel; that's the system call. The first thing the kernel is going to do, in a disk-drive file-system situation, is ask: can I already satisfy this request? If the answer is yes, it will immediately copy the results into the user's buffer and return. Can anybody tell me why the kernel might be able to immediately satisfy a request from the user? What might be the reason we could say yes? Cache — yes. The simplest way to think about this: if I execute reads 13 bytes at a time against a file system, the disk is transferring blocks that are 4 to 16K in size, so my 13-byte read may bring in, let's say, a whole 4K at a time, and that gets put in a cache — and there are a couple of different places of caching we'll talk about as we get further into this later in the term.
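A toy model of that answer — a one-block cache where the first small read misses and goes to the "disk," and later small reads hit (sizes, names, and the single-entry cache are all made up for illustration):

```c
#include <string.h>
#include <stddef.h>

#define BLOCK_SIZE 4096

static char disk_block[BLOCK_SIZE];     /* pretend backing store     */
static char cache_block[BLOCK_SIZE];    /* one-entry block cache     */
static int  cache_valid = 0;
static int  disk_reads  = 0;            /* count of actual disk I/Os */

/* Read `count` bytes at `offset` within the block; go to the "disk"
   only on a cache miss, then serve everything out of the cache. */
size_t cached_read(size_t offset, char *buf, size_t count) {
    if (!cache_valid) {
        memcpy(cache_block, disk_block, BLOCK_SIZE);  /* slow path */
        disk_reads++;
        cache_valid = 1;
    }
    if (offset >= BLOCK_SIZE) return 0;
    if (count > BLOCK_SIZE - offset) count = BLOCK_SIZE - offset;
    memcpy(buf, cache_block + offset, count);         /* fast path */
    return count;
}
```

This is why the kernel can often answer a 13-byte read immediately: the whole 4K block the bytes live in is already sitting in the cache from an earlier read.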
That cache is going to hold the whole 4 kilobytes, so the first read may take a little while — going to the disk and back to the user — but the next time, for the next 13 bytes, the data is already in the cache. But assuming that's not the case, we realize we've got to do some actual I/O to an actual device — so this is the point at which we send the request to the device driver and potentially put the process, or the thread, to sleep. When we get to the top half of the device driver, it takes the request and figures out how that translates into the particular blocks that need to be read off the disk — when we talk about file systems you'll get an idea of how we figure out which blocks — and then it starts the actions with the disk drive and puts things to sleep; that's where, for instance, the strategy routine takes over. At the point that the device driver is done and has sent the request to the device, it puts the thread to sleep. So you could say the thread started at the user and worked its way all the way down to the top half of the device driver, and it's now sleeping. And meanwhile — yeah, that's a really good question: does the device driver have its own process? Not exactly. Notice that everything I've talked about here runs in response to the process's request, the system call, so it's running on that process's kernel stack. Eventually we reach the point where we can't go any further, and what happens there is that kernel stack — that kernel thread — gets put to sleep on a wait queue associated with the disk, and another thread gets scheduled to run. But the thing that has run up to this point has been the kernel thread of that user process. Now, another reason you might ask whether it has its own
process is: okay, so nothing's running anymore, what happens? well, we've asked the device to go ahead and start executing, so the device is off doing its own thing, and eventually when it's done, possibly transferring the data into memory through something called dma, we'll get there again a little later in the term, it will generate an interrupt to tell the system it's time to wake up. and that interrupt is going to go to the bottom half of the device driver, which is now going to figure out who might be sleeping waiting for that block, in which case it's going to determine who needs the block and it's going to wake up that process. and from that point, the original process, which was a kernel thread put to sleep, now wakes up and says, oh, let's transfer the data to the process's buffer and then return from the read system call. okay, so any questions on that? now, there's a lot of interesting details in here which we haven't talked a lot about yet, but for instance the process of sending, no pun intended, the act of sending requests to that device: it's possible that the device driver is going to take a set of requests from a set of different processes and reorder them, using for instance something often called an elevator algorithm, so that the requests do the least amount of head movement. okay, but that's a topic for another day; right now we're just talking about the device driver. okay, so that's what i wanted to say about device drivers. i think the important thing to get out of this for now is the idea that it's the kernel thread associated with the requesting thread that gets put to sleep, and assuming that we have a one-to-one mapping between the user thread and the kernel thread, and there's a post that i did on that last night in piazza as well, then putting the thread to sleep here in the i o is okay, because all the other threads have their own kernel threads and get to keep going. on
the other hand, if we had a bunch of user threads running in this environment, if any one of them were to do i o and got put to sleep, then all of the user threads associated with that one kernel thread get put to sleep. okay, so that's the danger of the many-to-one model, where multiple user threads are associated with a single kernel thread. okay, good. so if you're not aware of it, i will make sure you know: tomorrow is our first exam, it's five to seven, as has been stated. we have some special dispensation for 170 folks. this is video proctored; i know there were a lot of people that were worried about having to prop up their cell phone and so on, so there's been a bit of an update on what we're asking of you, but we definitely do need the webcam turned on and you logged into zoom with screen sharing while you're doing this, and more details are up on the pinned piazza link, so hopefully everybody's got that figured out. now, the topics for this exam are pretty much everything up to monday. i know we originally said everything up to lecture eight, but really lecture nine was mostly a continuation of lecture eight; we spent some time looking at monitors in a little bit more detail. so scheduling, which is today's topic for the rest of the day, is not part of the exam, so don't worry about that. homework and project work is definitely fair game. the other thing is the materials you can use during this exam: this is closed book, closed friends, closed random internet people, no open channels with folks. you're allowed to use your brain. yeah, you can use your personal device drivers, which hopefully will drive your fingers properly when you're typing answers in. you can have a single cheat sheet, eight and a half by 11, handwritten on both sides. and i think that's it. any questions? are device drivers in scope? they're not going to be in scope on the exam,
so things from today that didn't get talked about on monday are not in scope, but you're allowed to use the device drivers in your fingers as well. okay, any other questions? yes, so there will be a sheet at the end; i think there was even a piazza post showing you what it was like, but there will be a sheet with important system calls and function call signatures that you need. it wouldn't hurt, though, to know the ones that you've been using a lot in your project, just in case we forgot to give you one. all right, and for things like thread creation and so on, if you don't remember the exact signature and we didn't give it to you, then do your best to come up with the signatures that you need. i think we've been pretty good about that, but if we forgot something, you pretty much know what the signature should look like, because you've been using it, so i think you'll be okay if you've got some arguments slightly reordered. all right, but most of the things that you need signatures on we will give you, for system-level things. okay, now if there aren't any more questions about the midterm, let's go on to topics for today. [applause] one last time here: how hard is it? perfectly not too hard, not too easy, just the right amount for the time that you have to do it. i don't know. i would say use your primary monitor. all right. in terms of printing the exam, it's an online thing unless you've talked to us and have some special arrangement, so it's going to be an online exam. you'll be able to put your answers in, and when you save an answer it holds on to it for you, so there won't be any losing of answers or anything like that. all right, and all those details should be posted already, so you should take a look, make sure you're ready with any setup in terms of being able to log in, getting your zoom set up and so on. you should
probably try that out tomorrow or tonight, whatever, just to make sure you're ready. okay, today i want to switch topics a little bit, but it isn't really that big of a switch. we've been talking about the mechanisms for going from one thread to another, okay, and we talked a lot about that over the course of the last nine lectures, and the one thing we didn't really talk about is how do you decide which thread to switch to. okay, and that turns out to be an extremely interesting topic. so i'm showing you a loop here; this loop kind of represents everything there is about the part of the operating system which does a continuous loop, and it basically says: if there are any ready threads, pick the next one and run it; otherwise run an idle thread, and the idle thread is typically just a kernel thread that does nothing, keeps some statistics, and we just keep doing this over and over again. and the question about how frequently do i loop, and when do i cut somebody off and pick somebody else, or how do i choose between the 12 things that are ready to run, that's all scheduling. and scheduling is interesting because there are many policies, many different reasons for choosing one thing over another, and so scheduling is actually a really deep topic that we spend a couple of lectures on, because it's interesting. and another figure that i showed you a while back is this one, which is kind of showing the cpu busy executing stuff, and every now and then something happens so that the current thread that's running in some process has to stop. okay, and examples of that are, for instance, if we do some i o, we were just talking about that a moment ago, where the cpu enters the device driver to do some i o and the device driver puts it to sleep; you get put on the queue with that i o and you've got to wait for the i o to happen. so that's an example of the cpu relinquishing the thread that's running and
putting that kernel thread on a queue, and of course the thing that has to happen immediately after that is, well, let's go to the ready queue and pull somebody else to run, because we don't want to waste cpu cycles. and the other thing that's kind of interesting here is: we're busy running along and the timer goes off, and at that point we say that the time slice has expired, and we put that thread that was running back on the ready queue, because its time is up, but then we've got to pull somebody off the ready queue to put on the cpu. okay, and then of course when we're doing fork, or we're doing some other synchronization that requires interrupts, we can be pulled off the cpu as well. and the real question of scheduling is: how is the os to decide which of several tasks to take off the queue? and if you didn't learn about this, you might easily say, well, this is dumb, you just pick the next one; why is that interesting? well, the answer is that picking the next one is rarely the right thing to do. you can have what's called a fifo scheduler, and of course we'll talk about that one first, but that's not the best thing to do. there may be many other cases where you've got to pick the one that's got highest priority for some reason, or you pick the one that's more likely to make you, typing at the keyboard, happy, because your keystrokes get registered. okay, and so scheduling is this idea of deciding which threads are given access to the resources moment to moment, and certainly for these next couple of lectures, cpu time is the resource we're talking about, but in fact very interesting scheduling happens with respect to disks; we could say, well, i've got a set of tasks and i want to make sure everybody gets equal bandwidth out of the disk drive. okay, so that's a scheduling requirement, a policy. but today, and next time,
we're definitely going to be talking about the cpu. okay, so we're all big fans of queues of various sorts, and here's a fifo queue; it looks like they're not social distancing, so this is a picture from a little while ago. so what are some assumptions? well, in the 70s, maybe this is a picture from the 70s, scheduling was kind of a big area of research. computers were new enough that people hadn't really figured things out, and the usage models were pretty basic, because people had mainframes in big rooms, and those were multi-million dollar machines, and you had a bunch of people using them, and so you had to somehow make sure that those multi-million dollar resources were properly shared among different users, because they were just expensive, and you couldn't let a user take too much time, but you also couldn't let a user who's maybe spent money for computer time be upset because they're not getting their fair share. and so the thing that's interesting is there are a lot of implicit assumptions in these original cpu scheduling algorithms, things like the following: one program per user, okay, or maybe one process per user; one thread per program; programs are totally independent of each other. these kinds of ideas are certainly not the case anymore, but they're a good place to start when we dive into scheduling. so these are a bit unrealistic, but they do simplify the problem so it can be solved initially. so for instance, the question might be: what's fair? is fairness about fairness among users, or about programs? if you think about it, if you have one user and they have five programs, and a different user has one program, how do you share the cpu? do you share it by cutting it in sixths and giving one sixth to each program? so now the user, just by having multiple programs, gets more of the cpu than the user who only has one program. okay, so that's a type of fairness, which is fairness per process, but
it's not fair per user, right? you know, if i run one compilation job and you run five, then you get five times as much cpu as i do; is that fair? i don't know. the high-level goal of course is still doling out cpu time, because when we do this swapping from user one to user two to user three, and back around, or, as we saw earlier, these were threads or processes, we need to decide which one's next, and we've got to do that with some policy in mind. and the interesting thing, which hopefully you'll figure out by the end of the lecture, is that there pretty much isn't one policy that works well for every situation, and it's often the case that when people have tried to come up with a single scheduler to work across a wide variety of platforms, things have gotten in trouble and not worked well on any of the platforms. so one thing we might use as a model here is an interesting idea, which is burst time, or cpu burst, and what you see on the left is the idea that we run for a while and then we go wait for some i o, and then we run a little longer and we wait for some i o, and we run a little longer and we wait for some i o, and in each of those instances, during the waiting, of course, we're on some queue, okay, because we can't run, and so we're off the ready queue and pretty much other people get cpu time. and so the execution model is really that programs are alternating between bursts of cpu and bursts of i o. and what's interesting is, if you were to measure that on some system, totally unnamed here for the moment, and you were to put burst time on the x-axis and how often you see a burst time of that size, you'd see there's a peak at the lower end but a really long tail. okay, now just to be clear, in case you're wondering again what i mean by this x-axis: i mean that if you look at this thing on the left and you say i run for some amount of time before i go to sleep, that amount of time is on this x-axis.
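stepping back to the fairness question from a moment ago, one user with five programs versus one user with one program, the two notions of "fair" can be made concrete with a few lines of arithmetic. this is just an illustrative sketch; the user names and helper functions are made up for the example, not part of any real scheduler.

```python
# Hypothetical comparison of two fairness policies from the lecture:
# user A runs five processes, user B runs one.

def per_process_shares(procs_per_user):
    """Fair-per-process: every process gets an equal slice of the CPU."""
    total = sum(procs_per_user.values())
    return {u: n / total for u, n in procs_per_user.items()}

def per_user_shares(procs_per_user):
    """Fair-per-user: each user gets an equal slice, regardless of process count."""
    return {u: 1 / len(procs_per_user) for u in procs_per_user}

users = {"A": 5, "B": 1}
print(per_process_shares(users))  # A ends up with 5/6 of the CPU, B with 1/6
print(per_user_shares(users))     # each user gets 1/2
```

the point is simply that "equal share per process" quietly rewards whoever spawns more processes, which is exactly the tension described above.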
okay, and so some of them are really short. like, for instance, if i type, what does that do? i type a character, it generates an interrupt, there's an interrupt routine that's run, that character maybe gets forwarded through the windowing system to a thread that's waiting for it, and then that thread goes to sleep waiting for the next thing, and that might be a very short burst, because the amount of work that happened for typing a character is short. on the other hand, certainly you have some processes that might be running for a long time, like computing the next digit of pi; they're going to be way out at the end of the long tail. and so if you were to look at the set of tasks, you'd find that it's weighted toward the small bursts, because there's a lot of those little ones, and furthermore you might infer that those little ones have something to do with interacting with users. okay, so programs typically use the cpu for a period of time, then do i o, then keep going, and scheduling decisions about which job to give the cpu to might be based on burst time. can anybody think what might be a policy we should use based on burst time for scheduling the cpu? what might be a good one, and can you think of a reason? okay, give the long durations first, okay, that would be one policy. so why? if i gave the long durations first, that means things that are short duration are potentially waiting a very long time, right? so long duration first might give me a lot of efficiency, because i'm using the processor cache well, but it's going to have a really bad impact on the short-burst ones, because they would otherwise just run quickly and finish, right? okay, we could go pseudo-randomly, sure. can anybody give a justification for why we might want to optimize for the short ones? what might be a good reason to try to optimize for the short bursts? so, maximize the number of processes that get to go in a fixed time, that could be one idea. there you go, i saw somebody that said, for
responsiveness, there you go, because if those short bursts are about interacting with users and they're short, i want to handle them quickly, because the users get the big benefit of seeing, you know, they type the letter a and then it shows up on the screen quickly, and the long ones are hardly going to notice if you hold them up for a moment to run a short one. okay, and so really, optimizing for short bursts may have something to do with responsiveness. and i will say something that will surprise you a little bit, but maybe you've run into this with shared resources: back when i first started writing papers, and i was using a mainframe to do it, there were a bunch of people logged in, and we had such a high load with so many people logged in that when i had emacs up and i started typing, you know, you might type "to be or not to be, that is the question", and then a second later, or three seconds later, the whole phrase would show up. so the scheduling was so bad that you could type whole sentences and then they would pop up on the screen. okay, and so that's not very responsive, and you can imagine, as we've gotten more and more into a situation where people have cpus of their own, they have lots of cell phones and other things, you're going to want to be cognizant of responsiveness. okay, so what are some goals that we might have for scheduling? so for instance, yeah, the sum of the waiting times being the smallest, that's going to be a metric we use in a moment, so that's very good. so, minimizing response time: we want to minimize the elapsed time to do some operation or job, and the response time is really what the user sees; it's the time to echo a keystroke in the editor, it's the time to compile a program. and for real-time tasks, which we're not going to get to today, we'll do that next time, you have to meet deadlines imposed by the world. so my favorite example of this is,
most cars these days have hundreds of little miniature cpus in them, and i want to make sure that when i slam on the brakes in my car, the system is responsive and applies the brakes quickly and in a timely way, so that i don't smash into whatever i was trying to avoid. so there's a real-time deadline in that scenario, where it's not just keeping me happy as a user, it's keeping me alive, right? so we can get into real time, where the deadlines are far more important than the kind we've been talking about up until now; we'll talk about that next time. another scheduling goal, which tends to show up in the big cloud services, is maximizing throughput, and this is completely different from minimizing response time. maximizing throughput is about saying i want to maximize the number of operations or jobs per second, and throughput is related to response time, but it's not identical, and minimizing response time leads to more context switches, because to minimize response time, what's happening is i'm handling a whole bunch of really short jobs, and then a long one for a while, and then a whole bunch of short ones, and as a result i'm context switching a lot, and as you know, context switching has an overhead associated with it, and so i'm actually not getting the maximum throughput when i'm context switching. okay, now there's two parts to maximizing throughput: one is minimizing the overhead, which means not context switching much, and the other is very efficient use of resources like the cpu, disk, memory, etc. and a key point here, which is something to start putting into your viewpoint and thinking about, is that by not context switching a lot, not only do i avoid overhead, but i also avoid disturbing important things like cache state. okay, so in 61c you talked about the power of caches; we'll talk about the power of caches when we get to file systems, but by not switching, but rather running for long periods of time, what i do is get a chance for the cache state to build up, and then for me
to take advantage of it. so high throughput is often in direct contrast to response time; those two are conflicting with each other. another one, which is really funny, is fairness. you know, what does that mean? there are so many different versions of fairness. you could say, well, roughly i'm sharing the cpu among users in some fair way, but fairness is not about minimizing average response time, because the way i get better average response time is by making the system less fair. anybody tell me why that would be so? why is better average response time achieved by making the system less fair? does that make sense to anybody? there you go: people who make more requests, because they have a lot of bursts, get priority, and they get priority over other users. so the mere act of having bursts gives you more cpu. okay, i'm going to give you a funny instance, if we get to it today, of people in the early days of computer othello playing each other, who figured out that by putting print statements into their othello code they could get more cpu time than the other guy and get an advantage. so let's start with the first obvious thing here, which is first come first serve, or fifo, and really you could think of this as run until done. so in really early systems, first come first serve basically meant you'd submit your programs in a big queue at night, and you'd come in in the morning and they would have run, one after another, in fifo order. now that was the original notion: you ran everything to completion. today we basically run the cpu until the thread blocks. so what it says is: you have a queue, the ready queue; you put things on the end of the ready queue when they get pulled off the cpu or woken up from i o, and then the scheduler just grabs the next one off the head and keeps going down the queue, one at a time, in fifo order. okay, and so to show you what this means, here's a gantt chart, i'm sure you guys have run into these before, but it's basically showing a
sequence of events in time, and what we're seeing here is an example where three processes, p1, p2, p3, came into the ready queue. the first one had a burst time of 24, the second one had a burst time of three, the third one had a burst time of three, and so we run for 24 until the burst is done, then we run for three, then we run for three. and if we were to view the users of these processes, we can ask ourselves: what's the waiting time for them? okay, so process one doesn't wait at all, okay, because it starts right away; process two has to wait 24 time units, whatever this is; process three has to wait 27. and so we can average the waiting time: zero plus 24 plus 27, over 3, and that's 17. and we can talk about average completion time, right? p1 ends at 24, p2 ends at 27, p3 ends at time 30, and that gives us an average completion time of 27. so this average waiting time and average completion time end up being metrics that we could optimize for, if that turned out to be something we wanted to do. now, the big problem with first come first serve scheduling, there are many problems, but let me just show you the biggest one here, is what's called the convoy effect, which is: short processes are stuck behind long ones. this is also the five-items-or-less problem at safeway, where you come in only wanting a bottle of milk and some chips, and you try to go in the short line, and some person in front of you has decided that their cart full of 50 items fits into the five-item requirement. you've just succumbed to the convoy effect, because you are now serialized behind that long job. okay, and so here's the convoy effect with first come first serve scheduling. what happens is, we see when arrivals are coming here, so here the arrivals are showing up, and this is the actual execution at the top: we had a blue one executing, and then the green one arrived at this point, and the green one doesn't get to start until that point, and so on, and now the
dark green one gets to run, okay, and then the red one shows up and the blue one shows up, but now the red one is long, all right, and so that powder blue one is now stuck for a very long time. so if you look at the queue here, that's what's going to build up: the red one is now running, but now the blue one is queued, and another one comes along, and another, and another, and all of these tasks are stuck while the red one is running, and then when it finishes, they all finish out pretty quickly. so this is a convoy of jobs that are all stuck waiting for some long job; that's why it's called the convoy effect. okay, all right, and the funny thing about this is, if i were to just switch the order of what i showed you earlier, where i showed you p1, p2, p3: if they were to come in as p2, p3, p1, then look what happens here. p2 gets to run quickly, p3 gets to run quickly, and then p1 runs, and so the waiting time for p1 is now six. so notice that p1 only has to wait six units of time before it starts running, p2 has zero, and p3 has three. the average waiting time now is six plus zero plus three, over three, which is three. the average completion time is three plus six plus 30, over 3, which is 13. and if you compare with what we had before, just because p1 arrived first rather than last: notice that we had an average waiting time of 17 when p1 showed up first, now it's only three; we had an average completion time of 27 when p1 showed up first, now it's 13.
so you can see that first come first serve, fifo, has this problem that it's very uneven as to how it services things. okay, so the pros and cons: short jobs get stuck behind the long ones, that's a definite con; it's simple, okay, so that's a pro, and it's going to be the simplest thing we come up with in this set of lectures; but this is really the safeway get-the-milk effect. i guess the good thing is you can sort of read the rags there while you're waiting for that other person to get through, and find out about the space aliens that have landed somewhere in nebraska, but it's always getting the milk, yup, the milk is the important thing here in operating systems. we haven't spilled any yet, though, so we'll have to see what happens when we spill the milk. all right, so let's see if we can do better, okay, because this unevenness with scheduling seems like a downside at minimum, and there's got to be something better. so the simple thing we can do is what's called round robin scheduling, and this is going to be our very first stab at fixing first come first serve. and really, with first come first serve, i mean, look at this: it's potentially very bad for short jobs, which is going to be very bad for responsiveness to users, right? here i'm showing you the best case of first come first serve; the previous, worst case was extremely bad, and we don't know whether we're going to be responsive or not, and that just seems like it's going to annoy the users, right? and so what else can we do? so that's a robin there, because, you know, i can do cut and paste and put in clip art, but the round robin scheme is going to be all about preemption. okay, so every process is going to get a small amount of cpu time, and we're going to call that a time quantum; in typical operating systems today it's 10 to 100 milliseconds, and we're only going to let jobs run for a time quantum, and then we're
going to preempt them and move on to the next one. okay, so after the quantum expires, the timer is going to go off, this should sound familiar to everybody, the process is going to be preempted and put on the end of the ready queue, and the next one in fifo order is going to be pulled off the front, and this preemption is going to give us different behavior. now we can do some quick analysis about this, right? so if we have n processes in the system, then we can figure out that every process, if it's running for a long time, gets one nth of the cpu time. okay, and yes, this is, as was said on the chat, a quantum leap in scheduling. so, in chunks of at most q time units: we can see that if there are n processes, running in time chunks of q time units, no process is ever going to wait more than n minus 1 times q time units. so there is now a minimum, excuse me, a maximum amount of time we have to wait, which is going to mean that we have a sort of minimum level of responsiveness we can get out of the system, at least so as not to annoy users too much. now, the system i talked about earlier, where i was typing whole sentences and they took a long time to show up, was still like this, but it was a situation where n was so large that this time got too large. okay, so what about round robin scheduling, the performance? well, if q is extraordinarily large, that's the quantum time, we reconverge back to first come first serve, right? if q is really small, we interleave, okay, and actually, i guess if you thought really small, you might think of this almost like what hardware hyperthreading does. so q clearly has to be big enough that the overheads don't kill us, so that context switching isn't the only thing we do, we actually do some computation, but it can't be too big or we don't get the benefits from a responsiveness standpoint. so here's an example of round robin with time quantum equal 20.
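as a quick sanity check on that n minus 1 times q waiting bound, here's a toy simulation. to hedge: it assumes n long-running processes that always use their full quantum, ignores context-switch overhead, and the `worst_wait` helper is invented for this illustration.

```python
def worst_wait(n, q, rounds=4):
    """Round robin over n CPU-bound processes with quantum q: measure the
    longest time any process spends queued between two of its turns."""
    t = 0
    rejoined = {p: 0 for p in range(n)}  # when each process last went back on the queue
    worst = 0
    for turn in range(rounds * n):
        p = turn % n                     # strict round-robin order
        worst = max(worst, t - rejoined[p])
        t += q                           # p runs for a full quantum
        rejoined[p] = t                  # p rejoins the tail of the ready queue
    return worst

print(worst_wait(4, 20))  # 60, i.e. (n - 1) * q with n=4, q=20
```

with four processes and a quantum of 20, nobody waits more than 60 units between turns, exactly the (n-1)q bound from the lecture.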
so i have a set of four processes here, and i'm going to show you the gantt chart for this. process one's got a burst time of 53, process two, 80, excuse me, let's try that again: process one has a burst time of 53, process two has eight, process three has 68, and process four has 24. and so process one, being the first one to arrive, runs for 20 and the timer goes off. okay, and a good question here is: do we know the burst time a priori? no, all right, we're going to talk about that in a moment. so burst time is some magical prediction of the future, which we're going to have to address in a moment; however, i can tell you after the fact, if i've observed what happened, i know what the burst time was, because i know how long it ran. okay, so in the next few slides, assume that what's happening is i say, well, i knew how long that was going to run, and i'm just playing with the scheduling to see what's different. okay, we'll get to where burst time comes from in a bit, because nothing we're doing is based on burst time yet. so process one has at least 53 cycles it's got to run, and so it's going to run for 20, and then the timer interrupt is going to happen, and it'll be put on the end of the ready queue, and the next one, which is process two, is going to come up, but it's only going to get to run for eight. so why does process two only run for eight instead of twenty? because it's done, right? it doesn't have anything else to do, so it finishes at eight. process three comes around, runs for its twenty; process four runs for twenty; process one gets back again, it gets to run for 20, okay; process two doesn't run because it's gone; process three runs for another 20.
and here's the rest of it; we won't bore you with all the remaining details, but if we were to compute the waiting time, we would see that we get 72 for process 1, 20 for process 2, 85 for process 3, and 88 for process 4, and we can come up with the average waiting time and average completion time, which are these two numbers: 66 and a quarter, and 104 and a half. okay, so now, this is a good question that was posted: what happens if somebody uses fork to create way too many processes and thereby takes this over? okay, so right now we're talking about a situation where every process, not necessarily every user, but every process is given an equal amount of time. now, the way you start dealing with malicious users, as is being talked about in the chat here, is that you start noticing that a given user has too many processes, or they're creating them too frequently, and you put a restriction on how rapidly they're able to create them, or how many they're able to create. okay, but good observation there: if somebody creates a lot of processes, they can tie up everybody else, because we really are giving equal weight to every process right now. so, round robin pros and cons: one, it's better for short jobs. how do i know that? well, if you look, this short job p2 got to run starting at cycle 20 rather than waiting until cycle 53.
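the numbers from the quantum-20 example can be reproduced with a small simulation. again a toy sketch: all four processes arrive at time 0, context-switch overhead is ignored, and `round_robin` is an illustrative helper, not a real kernel interface.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round robin for jobs all arriving at time 0.
    Returns per-job (waiting time, completion time)."""
    n = len(bursts)
    remaining = list(bursts)
    completion = [0] * n
    t = 0
    ready = deque(range(n))
    while ready:
        p = ready.popleft()
        run = min(quantum, remaining[p])  # run until quantum expires or job ends
        t += run
        remaining[p] -= run
        if remaining[p] > 0:
            ready.append(p)               # preempted: back to the tail of the queue
        else:
            completion[p] = t
    waits = [completion[i] - bursts[i] for i in range(n)]
    return waits, completion

w, c = round_robin([53, 8, 68, 24], 20)
print(w)                       # [72, 20, 85, 88]
print(sum(w) / 4, sum(c) / 4)  # 66.25 104.5
```

running it gives exactly the per-process waits and the 66.25 / 104.5 averages from the gantt chart above.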
So the fact that we're running things round robin means those short jobs complete much sooner than they would on a first-come, first-served basis. Let's look at a couple of examples. Suppose we have two tasks, one with a burst length of 10 and one with a burst length of 1. If we run them first come, first served, we get an average response time of 10.5, and the same with a quantum of 10, because a quantum of 10 just runs T1 to completion and T2 comes afterwards. With a quantum of 5, we run T1 for 5 and then T2 gets to run, so the smaller quantum (half the size) gives us better response time. Now you could say, well, this is interesting: why don't we just set Q to 0.0001, the smallest number we can come up with, to make things more responsive? Overhead? Yes, good answer: switching is expensive. Here's an example where the two threads have the same burst length, and whether the quantum is 10 or 1 we get equal responsiveness; that's just because their burst lengths are equal, so equal quanta treat them similarly. And if the quantum gets too small, notice what's interesting: our average response time actually went up, and the reason is that the green thread ended up taking longer; it had to wait a bit for the blue one because of the interleaving. So just because we have a small time slice doesn't necessarily mean the average completion time goes down; you have to be a little careful here. Now, how do you implement round robin in the kernel? You start with a FIFO queue, just like in first come, first served, but you have a timer that goes off on a regular basis, and you set the quantum. How do you set the quantum?
Well, that's actually a configuration parameter in the kernel, but as I mentioned, it's usually set to 10 or 100 milliseconds if you don't change anything. A timer interrupt goes off, and you use it to take the current thread off the CPU and pull the next one off the ready queue. We've been talking about this mechanism pretty much since day one; we just weren't calling it that. And of course you have to be careful to synchronize things so the queues don't get messed up in the process. All of this is relevant to project two, scheduling: when project two shows up you get to actually implement some scheduling, which is something to look forward to. So how do you choose a time slice? If it's too big, response time suffers; if it's infinite, we get back FIFO; if it's too small, throughput suffers because there's far too much overhead in switching. As for actual choices of time slice: the initial UNIX, which was intended more for batch mode, used about a second, which meant that when people were typing rapidly you were simply not seeing responsiveness; with three processes you could end up with three seconds per keystroke. You're trying to balance short-job performance against long-job throughput, and that's where the 10 to 100 milliseconds comes into play. If you know anything about HCI, you know that the 10 to 100 millisecond range is about where responsiveness starts to matter for the things humans can notice. Typical context-switching overhead is about 0.1 milliseconds to a millisecond, so systems are really targeting about a one percent overhead, no more, in context switching. Now, to the question of whether the scheduler can discriminate priority by program ID or user ID: absolutely, we'll get there, because so far we've only been treating
everything exactly the same, and we'll see that might not be the right thing to do. So, comparisons between first come, first served and round robin. Assume zero-cost context switching for the moment: is round robin always better? Here's a simple example where 10 jobs each take 100 seconds of CPU time, the round-robin scheduler has a quantum of one second, and all jobs start at the same time. If we're doing FIFO, the first job runs for 100 seconds, the second for 100 seconds, and so on, and when we're done we end at second 1000. If we're doing round robin, cycling every second, the first job isn't done until second 991, then the second one finishes, and so on. So in this situation the average response time is tremendously worse in the round-robin case than it would be in the FIFO case. If you're talking about a lot of identical, very long jobs, round robin is just not the right thing to do, because you slice everything into little pieces and the jobs have to interleave for many, many periods before they finally finish. And when we put context-switching overhead back in, it becomes really bad. The cache state also has to be shared, and I can't overemphasize that this is an issue: in the FIFO case, if I get to run for 100 seconds, I get all of the cache for that one job and the processor hums along like a race car; if I'm switching over and over again, the cache state never gets much time to build up. Of course, those of you who are really paying attention realize that one second is pretty long in CPU time, but certainly FIFO gives you much better use of the cache. Here's a kind of interesting thing: an example of first come, first served scheduling for processes one, two, three, and four, who
happen to arrive in that order: process one shows up, then process two, then process three, then process four. If I were an oracle and knew the future, I could reorder them to give the best possible first come, first served behavior, which would be to do the shortest one first, then the next shortest, and so on. (Yes, the colors are eye-searing; isn't that wonderful.) If we look at this, the best first come, first served gives us an average wait time of 31.25, whereas the worst is 83.5; for completion time, the best ordering averages 69.5, whereas the worst averages 121.75. So there's a vast difference between the best and worst first come, first served cases. In the middle, for instance, round robin with a quantum of 8 gives us a wait time of 57.25 and a completion time of 95.5, each between the two extremes. The thing to take away from this (other than how vibrant those colors are) is that by using round robin we can find a way, without knowing anything about the jobs or when they arrive, to get a fair-to-middling response time, wait time, and completion time. That's why round robin is often used as a default simple policy: just by switching, you get rid of the worst behavior of FIFO. And then we could look at other options in the middle. It's interesting that there isn't any obvious best quantum, because as you move away from 8 in either direction the numbers go up. So another problem with round robin is: what's the ideal quantum? Well, you pick one that works pretty well for everything and stick with it; that's the standard approach. Notice here that P2 is the shortest job, and notice that the difference between best and worst is horrendously bad for
P2: the best first come, first served gives it zero wait time and the worst gives it 145; its best completion time is 8 and its worst is 153. So poor P2 is heavily affected by scheduling because it's short. P3, the longest job, hardly even notices: under the best first come, first served ordering it waits 85, under the worst it waits zero, and its completion time is 153 or 68. There is some difference there, but mostly the long jobs don't notice and the short jobs really do. So the scheduling decisions we're making are really targeting how to get better responsiveness without disturbing the long jobs that need to run efficiently, and by and large the long jobs don't notice too much. (Unless you give continuous priority to short jobs so the long ones never run; that's a problem.) But by and large the short jobs do better if you schedule this way, and those are the ones users care about. Now, a good question here: how do we know what's short and what's long? Right now everything we've done is oblivious to the length; we don't know anything about the burst time in advance, and we're only analyzing what happened after the fact. But you could imagine remembering things like: the last time P2 woke up its burst was really short, so it might be really short again. Very good. And yes, the hardware does the interrupts to handle the keystrokes coming in, but ultimately the operating system has to take over to forward the resulting keystrokes to the right application; the hardware does what it can, but eventually the device driver takes over and forwards on to user threads. Now, suppose we want to handle differences in importance between different threads, which was brought up earlier. We could start doing something called a priority queue, and
the priority queue is something like this: we have different priorities, and we run everything from the highest priority first before moving down to the next level, and as long as we have high-priority jobs, we run those at the expense of the lower-priority ones. Now, to the question that showed up in the chat, can't we hard-code keystrokes to be handled every 50 milliseconds? Whatever you do, you have to have a scheduler that knows how to preempt a long-running job when a keystroke arrives, and priority is something you could use for that: you could make the threads handling user events higher priority than the ones that aren't. That could be an option. Unfortunately, it's very hard to know for sure that a given thread always ought to have the highest priority, because there can be situations where other things absolutely have to run while your highest-priority thing just keeps running and everything else doesn't; you start getting into livelock problems. And you don't always know what the highest-priority job ought to be unless a user tells you, and then you don't always believe them, because everybody will say their thing is the most important. That's why scheduling is such a tricky thing: how do you pick the scheduling policy that makes everybody happy? So the execution plan in a priority scheduler is: always execute the highest-priority runnable jobs to completion, and each queue within it can be processed round robin with some time quantum. In this scenario, say priority three is highest. (Always make sure you know which end is the highest priority before you draw a conclusion; in some systems zero is the highest, but here we'll say three is.) We handle the jobs at priority three round robin, cycling through all of them until there are no jobs left,
and then we move on to priority two; and if a new priority-three job comes along, we immediately preempt and start running priority three again. That's how a priority scheduler works. The problems that show up include, among other things, starvation, where lower-priority jobs never get to run because of high-priority ones, and ultimately forms of deadlock or priority inversion, which is closer to a livelock, and which happens when a low-priority task grabs a lock needed by a high-priority one. Imagine job six is running along and grabs a lock, and then job one comes along, tries to run, and tries to grab the lock, but can't because job six holds it. That's a priority inversion. Now, in that simple case, with only two threads in the system, it turns out not to be a problem. Why? Job six has the lock; job one tries to run and grab the lock but is held up. How does that resolve? Has anybody figured that one out? Job one tries to grab the lock and goes to sleep; who gets to run? Job six. Job six eventually finishes, gives up the lock, and immediately job one gets to run because it's high priority. So that case resolves. But the comment about giving priority to the job holding the lock is important, because if there were many jobs in the system, what really ought to happen is that the priority-three job hands its priority to the priority-zero job six long enough for six to release the lock, and then job one runs. That's called priority donation, and you get to figure out how to implement it in project two. The other interesting thing about this priority inversion problem is what happens with a third task: job six has the lock, job one tries to grab it and gets put to sleep, and job four, an intermediate-priority task, starts running continuously. Now we have a problem, and the reason is that job one needs to
run but can't, because it's waiting for job six, which won't run because job four is running. This is a situation that won't resolve. Now, why is this a priority scheduling issue? Because we've set up a scenario where priority two is running continuously while priority one, which is higher, can't run because it's waiting on the low-priority job six. We have a priority inversion where job four is essentially preventing job one from running, because it's preventing job six from releasing the lock. What's interesting is that this kind of priority inversion is exactly what almost toasted the Mars rover; I'll tell you a bit more about that next time, but there was a situation where a low-priority job grabbed a lock on a bus while a medium-priority job was running, the high-priority task needed to get in, and a watchdog timer would detect the problem and keep rebooting the rover, which would then get stuck in the same priority inversion again. So how do you fix this? You need some sort of dynamic priorities: adjust the base-level priority up or down based on heuristics. One mechanism, as I said, is priority donation, where job one, as it goes to sleep, gives its priority to job six, because it knows who's holding the lock. All right, so what about fairness? Strict priority scheduling between queues is unfair: you run the highest, then the next, and so on, and long-running jobs may never get the CPU. There was an urban legend for a long time that on Multics, one of the original multi-process machines running at MIT, when they finally shut the machine down years later they found a ten-year-old job still waiting to run. That's just an urban legend; it wasn't true. But the idea is there: things running in a priority world
might prevent something else, some background task, from ever running. So the trade-off here is that fairness, where everybody gets to run, is being hurt by optimizing average response time. And to the question of whether the priority inversion would be resolved when the intermediate task finishes: maybe, unless there are other intermediate tasks. The fact that you have a high-priority task means you want it to run right away, and the fact that something lower is preventing it means the priority scheme the designer came up with has been completely subverted. That's the problem. So how do you implement fairness? You could give each queue some fraction of the CPU. But if there's one long-running job and 100 short ones, what do you do? It's sort of like express lanes in a supermarket: sometimes the express lanes get so long that you get better service by going into the other lines. So maybe there's a way to give some CPU to every queue, or maybe you increase the priority of jobs that aren't getting service. Next time I'm going to tell you about several variants of UNIX schedulers, including the O(1) scheduler that was the Linux standard for a long time. It had a dynamic scheme where it would continuously adjust priorities up and down based on heuristics, figuring, say, that this must be an interactive task because its bursts are all short, so its priority goes up, while that thing runs for a long time, so its priority goes down; all these really complicated heuristics trying to adjust priorities based on what it thought was happening. That's something people have tried, but it's really hard to get right. But what if we knew the future? Shortest job first says you run whatever job has the least amount of computation to do. This is sometimes called shortest time to completion
first. There's also a preemptive version, shortest remaining time first, which says whatever job has the shortest remaining time gets to run: if a job arrives with a shorter remaining time than the running one, it preempts. But what's the problem with this, and the reason I'm showing a crystal ball here? You have to know, for every job in your queue, which one has the shortest remaining time. You can apply this to the whole program or to the current CPU burst, but the idea is, if we somehow knew the future, to get the short jobs out of the system: that has a really big effect on short jobs and responsiveness, only a small effect on the long ones, and the result is better average response time. So shortest job first and shortest remaining time first are the best you can do at minimizing average response time; you can prove they're optimal if you know the future. To compare SRTF with first come, first served: what if all the jobs are the same length? Then shortest remaining time first just becomes the same, because if all the jobs are the same length and you start running any one of them, it's the shortest from that point on and runs to completion. So SRTF degenerates into FIFO when everything is exactly the same length. If the jobs have varying lengths, the short jobs always get to run ahead of the long ones. This is like that eye-searing colored slide earlier where I showed the difference between the best and worst first come, first served: SRTF effectively discovers the best first come, first served ordering by always picking the shortest jobs and running them first. Now, a question: could we use a neural-net policy? Maybe, but let's talk about the benefits for a moment. Assuming we can predict the future, here's an example where A and B are long, CPU-bound jobs that each run for a week, and C is an I/O-bound
job that runs for one millisecond of CPU, then issues a disk operation that takes nine milliseconds, then runs another millisecond to figure out what to grab next in the following nine milliseconds, and so on. This C job is like what you might get copying from one disk to another. If C is running by itself, notice that you get 90% utilization of the disk, but only if C runs by itself: if something disturbs that one millisecond of CPU so it takes much longer, the disk operations still take nine milliseconds each, but the time between them grows, and I'm no longer keeping the disk busy. A or B alone would get 100% of the CPU and can run well for long periods; here I'm saying they run for a week while C runs in short bursts. So with first come, first served, once A or B gets in, C doesn't get to run for a week, and I get no bandwidth out of the disk. What about round robin or shortest remaining time first? Here's round robin with a 100-millisecond time slice: C runs for a millisecond, and while its disk operation is in flight for nine milliseconds, A runs for its 100-millisecond slice, then B runs for 100 milliseconds, and only then does C get its next millisecond and issue its next disk I/O. With this 100-millisecond round robin, which might be the default on Linux, you get about 4.5% of the disk rather than our target 90%. If I shrink the round-robin quantum to a millisecond, I get my disk utilization back to 90%, but there's all that switching overhead; it looks very wasteful. SRTF, on the other hand, does exactly the right thing. Why? C runs because it's short; then, while the disk operation is in flight, C isn't even on the queue, so say A starts running. As soon as A starts running, its remaining time is shorter than B's, so A will always get to
run in preference to B. A runs until the disk interrupt comes back, at which point C is runnable again, and since C is shortest, C gets to run. In this arrangement I get back my ninety percent disk utilization entirely, and I have one hundred percent CPU utilization. That looks pretty good, again assuming I know the future. Problems here include starvation: if you noticed, this particular example never runs B, so B is starved until A is done and only then gets to run. SRTF can lead to starvation if lots of smaller jobs keep arriving and large jobs never get to run. And you need to predict the future. Some systems might ask the user how long a task will take, but I challenge you: when was the last time you knew exactly how many milliseconds some code was going to run? Probably not until after you ran it once. And users are not only clueless but not always honest; they may be dishonest purely because they're hoping to optimize their own usage. The bottom line is that you can't really know for sure how long a job will take; if we could predict that, I have some stock to sell you. But we can use SRTF as a yardstick for other policies, because it's optimal and you can't do better. So the pros and cons: it's optimal, with the best average response time, but the downsides are that it's very hard to predict the future and it's really unfair. Now hold on a moment; I want to give you a little bit more on this. The first question, and this was great, it was already brought up in the chat, is: how do you predict the future? We predict the future using all the techniques people normally use to predict the future. The great thing about CPUs and typical applications is that there's a lot of predictability in them, so if we want to change policy based on past behavior, we exploit that predictability. So
an example might be SRTF with an estimated burst length, where we use an estimator function such as a Kalman filter. The simplest Kalman filter is really just exponential averaging: I have some weight alpha, and the new estimate is alpha times the latest observed burst plus one minus alpha times the old estimate. (Oh no, "Kalman filters" has just been declared in the chat. I will say that Kalman filters do have their place.) You're welcome to put some sort of machine learning in here; there's always a trade-off, however, between the cost of doing the prediction and the benefit, because there is overhead to doing the prediction. Anyway, as you can see, there are ways for us to predict the future. Now let's target some of the other shortcomings. One thing that's wrong with SRTF is that it's very unfair, so here's another alternative I want to introduce, called lottery scheduling. This will be very short. The idea is that you give each job some number of lottery tickets, and on each time slice you randomly pick a winner; on average, each job's CPU time is proportional to its number of tickets. You assign tickets just as SRTF would suggest: you could give short jobs more tickets and long jobs fewer, and then probabilistically the short jobs get more chances to run than the long ones. And to avoid starvation, since every job gets at least one ticket, we know that as we cycle through the tickets, everybody will have gotten to run a little. So unlike SRTF, which can actually shut somebody out indefinitely, lottery scheduling doesn't, and lottery scheduling is closely related to other types of scheduling that try to apportion average CPU time. The advantage over strict priority scheduling is that it behaves gracefully: as you add more jobs and redistribute the tickets, everybody still gets to run. So
here's an example, and we'll get you out of here shortly so you can study some more. If I have one short job and one long job, and I give 10 tickets to the short job and one ticket to the long job, then the short job ends up with about 91% of the CPU and the long job gets about 9%. How do I know which jobs are short or long? I use something like the exponential-average predictor from earlier. If I have two long jobs, they each get 50%; that sounds right. If I have two short jobs, they each get 50%. Now, if there are too many short jobs to give reasonable response time, then perhaps we're overloading the overall machine; there is a point at which scheduling just can't fix the fact that you don't have enough resources. That's a topic for another day. So how do you evaluate a scheduling algorithm? We'll leave you with this thought. You can model it: these scheduling algorithms are mathematical objects, so you can do a queueing-theory evaluation, apply some workloads to it mathematically, and figure out how it goes, although that's typically a very fragile type of analysis that's hard to generalize. You might use steady-state queueing theory rather than transient queueing theory, which is a little simpler but still not exact. You can build a simulator that takes in a trace of how things actually went and simulates the results; that's often what happens with schedulers. Sometimes people just go ahead and toss a new scheduler onto a system, run it, and see what happens. Schedulers, unfortunately, as we're going to talk about next time, can get so complicated that people have no idea what they're doing and why, and that often leads to complete breaks in the code. The O(1) scheduler in Linux was tossed out rather unceremoniously by Linus in favor of the CFS
scheduler, and that was because it was getting so complicated nobody understood it anymore. OK, we'll finish this up next time, but we've been talking about scheduling. We covered round-robin scheduling, the simplest default scheduler: you give each thread a small amount of CPU when it executes and cycle between all the ready threads. The pro is that it's better for short jobs, which gives us a way in to optimizing for responsiveness. We talked about shortest job first and shortest remaining time first, where the idea is to run whatever job has the least amount of computation left to do; it's optimal for average response time, but the downside is that you have to predict the future, and we talked about various ways of doing that, including versions of the Kalman filter such as the exponential moving average, or possibly some machine-learning approach or something more complicated. We're going to get to multi-level feedback scheduling next time, which is a happy combination of a couple of these ideas: you have queues of different priorities, each queue has a different scheduler, with short quanta at the top and FIFO at the bottom, so you approximate SRTF in a way that gives really short jobs the CPU quickly but still gives good behavior for the long jobs. And we also talked about lottery scheduling, giving each thread a priority-dependent number of tickets. All right, with that we'll bid you adieu. Good luck tomorrow; I'm pulling for all of you, and I'm sure it's going to be great. I hope you have a wonderful weekend after that, so that once you're done with the test you can get yourself a little relaxation. Goodbye, everybody, and good luck.
MIT 10.34 Numerical Methods Applied to Chemical Engineering, Fall 2015. Lecture 18: Differential Algebraic Equations 2.
But before doing that, there are a couple of concepts I want to review from numerical integration, or actually concepts that weren't covered when we talked about numerical integration on Friday that I think are important. And then I want to review briefly implicit methods for ODE-IVPs because those are going to be important for differential algebraic equations. So for numerical integration we talked about quadrature of various integrals, how to develop quadrature formulas. But there's actually one type of integral we didn't talk about there, which is sort of improper integrals, where the limits of integration are unbounded. And these come up all the time in engineering problems. We'd like to know what the, say, integrated response of some function is over all possible times. And we may have to evaluate that numerically. So how do you do those sorts of integrals? And it turns out the strategy that's most often used is to divide this domain up into two pieces. One piece, which is proper integral over a finite domain, and another piece, which is an improper integral over an infinite domain. This first integral, you can handle it with an ODE-IVP method or some higher order polynomial interpolation. But the second one we have to do differently. And there are couple of ways to do this. One is to transform variables. So you map this domain onto a finite domain. And the other option is to substitute an asymptotic approximation. So we say, when, say, tau is large, f of tau has a characteristic asymptotic approximation. Maybe it's exponentially decaying. And we take the integral of that asymptotic approximation. And the more accurate that approximation is the better our approximation for this whole integral will be. There's another type of interval that we have to do, where the same idea can be applied. That's the integral over integrable singularities. So here's an example. 
So say we want to integrate the function cosine of tau over square root of tau from 0 to some finite positive time point. As tau goes to 0, cosine goes to 1 and 1 over square root of tau diverges. But this integral actually has a finite value. This 1 over square root of tau, that's an integrable singularity. So how do you do this? Well, you want to split the domain again into two parts. One part that contains the integrable singularity and the other part, which excludes it. And in the part that contains the singularity we might do an asymptotic expansion of the function, and integrate each of those asymptotically accurate terms separately. So the integral of 1 over square root of tau is going to give us 2 t0 to the 1/2. The integral of this ratio here is going to give us minus 1/5 t0 to the 5/2. And then we have an integral over a finite domain, which doesn't include the singularity. And the accuracy of this method can be dictated by two things now. One is how accurately can we do this integral. And the other is how accurate is our asymptotic expansion here. So if we want higher accuracy we need more terms. We might need to be selective about how we choose the end point for this domain in order to minimize the error. There are lots of methods to do this. Sam. AUDIENCE: Will you talk a little more about what a typical integrable singularity is? WILLIAM GREEN, JR.: Yeah. So you know these things. If I try to integrate 1 over x, as x goes to 0 this thing will always diverge. You know the antiderivative of 1 over x is going to be the log. And the log diverges. So powers weaker than 1 over x, like 1 over x to the 1/2, don't diverge like the log. Their integrals actually go to a finite value. The log is like the most weakly singular function there is. And integrals over weaker powers of 1 over x actually don't produce a singularity anymore. They're what we call integrable singularities. The function itself diverges but its integral does not.
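The cosine-over-square-root example above can be sketched numerically. The split point eps = 0.01 is a hypothetical choice; near zero the integrand is approximated by the first two expansion terms 1/sqrt(tau) - tau**1.5/2, whose antiderivative 2*sqrt(tau) - tau**2.5/5 is used in closed form. As a cross-check, the substitution u = sqrt(tau) removes the singularity entirely, which is the "transform variables" strategy mentioned earlier.

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b]."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return s * h / 3.0

def singular_integral(t0, eps=1e-2):
    """Integral of cos(tau)/sqrt(tau) on [0, t0].
    [0, eps]: integrate the expansion 1/sqrt(tau) - tau**1.5/2 analytically,
    giving 2*sqrt(eps) - eps**2.5/5.  [eps, t0]: ordinary quadrature."""
    head = 2.0 * math.sqrt(eps) - eps**2.5 / 5.0
    return head + simpson(lambda t: math.cos(t) / math.sqrt(t), eps, t0)

# Cross-check via u = sqrt(tau), which turns the integral into
# 2 * integral of cos(u**2) on [0, sqrt(t0)] with no singularity.
reference = 2.0 * simpson(lambda u: math.cos(u * u), 0.0, 1.0)
value = singular_integral(1.0)
```

Both routes agree to many digits; the splitting error is controlled by how many expansion terms are kept and where eps is placed, as the lecture notes.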
So you usually have this asymptotic power law type behavior. OK. So they come up all the time. We see these things in places like, say, squeezing a thin film of fluid between two plates. How long does it take for these two plates to come together? You might say, well, as the plates get closer and closer together the pressure starts to diverge in the gap and they'll never come together. But maybe depending on the geometry of the plates there can be a sort of finite time at which they'll come together. Depends on their geometry. So that's one topic. I think that's useful because these things come up all the time. And you can't actually handle those cases with quadrature by itself. The function diverges. There's no polynomial interpolation that's suitable to match it. So the quadrature can't handle it. And if you're trying to integrate over an infinite domain, well, good luck. You're never going to be able to fit enough points in to know that you've accurately integrated out to infinity. The other thing I want to recap here, and this is a problem for you to try. So here we have a simple first order ordinary differential equation. We want to use the implicit Euler method to solve it. So can you give a closed form solution for one step of the implicit Euler method, or for n steps of the implicit Euler method? What result does that produce? Can you compute this? Did you guys talk about backward Euler methods for ODE-IVPs? Backwards difference methods in general and the backwards Euler method specifically? No. OK. Well, clearly I should have been here. So what's the strategy? We have to approximate the derivative. Yes? So the approximation for the derivative we use is a finite difference approximation. So we write dx dt as x at a point k plus 1 minus x at the point k divided by delta t. Backwards difference. We evaluate the right-hand side of the ODE-IVP at k plus 1, xk plus 1. And then we solve for xk plus 1. We just substitute in here.
Backwards difference approximation for the differential. And xk plus 1 for the right-hand side. And we find that xk plus 1 will be 1 over 1 minus delta t times lambda, times xk. And if we iterate this from k equals 0 up to some finite k, it's like multiplying by powers of 1 over 1 minus delta t times lambda. Yes? Does this make sense? So our backwards Euler solution to this fundamental problem is going to look like this. And a lot of times we ask about the stability of these sorts of solutions. So if, for example, the solution to this equation was supposed to decay to zero, we would expect our numerical method to also yield results which decay to zero. We would call that stable. So do we all know under what conditions xk, for example, goes to 0 as I change the product delta t times lambda? Lambda could be a real number or it can be a complex number too. Doesn't really matter. So what needs to be true for xk to decay to 0 as k gets bigger? Well, what needs to be true is this quantity in parentheses needs to be smaller than 1, because I'm taking lots of products of things that are smaller than one. That will continue to shrink xk from x0 until it goes to zero. For this to be smaller than 1, this quantity down here in the denominator needs to be bigger than 1 in absolute value. And then I'll get solutions that decay to zero. So I ask, for what values of delta t times lambda is the absolute value of this thing bigger than 1, if I allow lambda to be a complex number, for example? I'll find out that's true when 1 minus delta t times the real part of lambda, squared, plus delta t times the imaginary part of lambda, squared, is bigger than 1. And if I plot the real part of delta t times lambda versus the imaginary part of delta t times lambda, this inequality is represented by this pink zone on the outside of the circle. So any values of delta t times lambda that live in this pink area will give stable integration. And the solution xk will decay to zero as k goes to infinity.
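The closed-form step just derived is easy to code up. A minimal sketch (the particular lambda, delta t, and step counts below are hypothetical test values, not from the lecture):

```python
def backward_euler(lam, dt, x0, nsteps):
    """Backward Euler for dx/dt = lam*x.  Each step solves the implicit
    equation x_new = x_old + dt*lam*x_new, i.e. x_new = x_old/(1 - dt*lam),
    so after k steps x_k = x0 * (1/(1 - dt*lam))**k."""
    x = x0
    for _ in range(nsteps):
        x = x / (1.0 - dt * lam)
    return x

# Matches the closed form after 3 steps.
x3 = backward_euler(-2.0, 0.1, 1.0, 3)

# Stable even for a very stiff decay rate with a large step size.
stiff = backward_euler(-1000.0, 0.1, 1.0, 50)

# The peculiar case: here dt*lam = 3 lies in the stable region, so the
# numerical solution decays even though exp(lam*t) actually blows up.
spurious = backward_euler(30.0, 0.1, 1.0, 20)
```

The last line is exactly the mismatch between stability and accuracy discussed next: the amplification factor 1/(1 - dt*lam) has magnitude 1/2, so the iterates shrink regardless of what the true solution does.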
So it makes sense? This is a little funny though. What does the solution to this equation do when lambda is bigger than 1? When it's bigger than 0, excuse me. When lambda is bigger than 0, what does the solution to this ODE-IVP do? Does it decay? AUDIENCE: [INAUDIBLE] WILLIAM GREEN, JR.: It blows up. It grows exponentially. But the numerical method here, when I have lambda is bigger than zero, can be stable and produce decaying solutions. So this is sort of peculiar. So we're often concerned when we solve ODE-IVPs and we try to use these sorts of backwards difference methods, these implicit methods, to solve them. We're often concerned with getting stability. We want to get solutions that decay when they're supposed to decay. Sometimes we get solutions that decay even when the solution is supposed to blow up. So that's sort of funny. So the exact solution should look like this. But there are going to be circumstances where lambda is bigger than zero. In fact, lambda is bigger than-- lambda delta t is bigger than 2 along the real axis, where the solution will actually blow up rather than decay away. So stability and accuracy don't always correlate with each other. We really have to understand the underlying dynamics of the problem we're trying to solve before we choose a numerical method to apply to it. And this is going to be true of differential algebraic equations as well. Are there any questions about this? There's a discussion of these things in your textbook, these sorts of stability regions associated with different backwards difference methods. The implicit Euler method is one backwards difference formula. It's the simplest one. It's the least accurate one that you can derive but it's an easy one to use. But now we want to move on to a different sort of problem. And it's one that we come across in engineering all the time. These problems are called differential algebraic equations. 
And they're problems of the sort: a function of some state variables, and the derivatives of those state variables, and time is equal to zero. And there are some initial conditions. The state variables at time 0 are equal to some x0. So this is a vector valued function of a vector valued state and its time derivatives. And we want to understand the dynamics of this system. How does x vary with time in a way that's consistent with this governing equation? Usually we call these well-posed problems when the dimension of the state variable and the function are the same. They both live in the vector space Rn. So we have n different states. I have n different functions that I have to satisfy. And I want to understand how x varies with time. Here's an example. A stirred tank. I call this stirred tank example one. So into the stirred tank comes some concentration of solute. Out of the stirred tank comes some different concentration of the same solute. And we want to model the dynamics as a function of time. So we know that a material balance of the solute on the stirred tank, if it's well mixed, will tell us that d c2 dt is related to the volumetric flow rate into the tank divided by the volume of the tank, multiplied by the difference between c1 and c2, the amount carried in and the amount carried out of the tank. And let's put a constraint in here too. Let's suppose we're trying to control this tank in some way. So we say that c1 of time has got to be equal to some control signal, gamma of t. And then we want to understand the dynamics of this system. At time 0, c2 is some c0, the initial concentration of the tank. And c1 is whatever the initial value of this control signal is. And the solution is c1 equals gamma of t. And c2 is, well, the initial concentration's got to decay out exponentially. And then I've got to integrate over how the input is changing in time. Those also produce exponential decay.
So if the input is a little delta function spike then I'll get an exponential decay leaving the tank. If the input is changing dynamically in time, I need this convolution of the input with this integral. So you can just solve this system, this ordinary differential equation, by substituting c1 in there. But this system of equations here is not just a differential equation. It's a differential equation and an algebraic equation. There's something peculiar about that that's different than just solving systems of ordinary differential equations. Oftentimes in models, we write down all the governing equations associated with the model. Some of them may be differential, some of them may be algebraic. There are times when we can look at those equations and see clever substitutions that we can make. But if you have a hundred equations, or a thousand equations, or a million equations, there's no way to do that reliably. We just kind of wind up with a system of ordinary differential equations and algebraic equations mixed together. And we have to solve these reliably. Here's another example. Oh, excuse me. So let me write this in this form. So we said there should be a vector valued function of the states and the derivatives of the states. The states are the concentrations c1 and c2. And the derivative here is dc2 dt. So here's my vector valued function. The first element is this, and it's equal to 0. The second element is this, and it's equal to 0. Here's my initial condition over the states. So I can just transform that list of equations to the sort of canonical form for a differential algebraic equation. Here's a separate example. Suppose instead, I'm trying to either control or make a measurement of the outlet concentration c2. And we call that gamma of t. And I want to know now, what is c2 and what is c1? So I've got to solve this system of equations for c2 and c1. The solution for c2 is easy. I know that. c2 is gamma, by definition.
I got to substitute this into the first equation and solve for c1. And I see, c1 is gamma plus v over q gamma dot. And I have to have initial conditions to go along with this. There's something funny here. We can see the initial condition for c2 has to be gamma 0. The initial condition for c1, what does that have to be? Well, it needs to be consistent with the solution here. There were no free variables when I solved this equation for c1. It isn't like solving a differential equation, having a free coefficient to specify. So it better be the case that this c0 here is the same as gamma 0 plus v over q gamma dot 0. Somehow I have to know this input signal to prescribe the initial conditions for c1. So that's peculiar and different from just ODE-IVPs. You see this picture? You see how this goes together? The solution is funny too. Suppose we are trying to use this system of equations to do the following. We are trying to measure c2 and use the measurement of c2 to predict c1. So gamma is the measurement, and we're trying to solve this system of differential algebraic equations to predict what c1 is. All measurements incur numerical error. So even though our signal for gamma that we measure may be continuous, it's bouncing around wildly. It's not a constant signal. It's going to move around a great deal. And c1, our predicted value for c1, depends on the derivatives of gamma, which means c1 is going to be incredibly sensitive to how that measurement is made. So it's peculiar. It means that there's going to be a lot of problems with stably solving these equations because the solutions can admit cases where they're not integrals of the input but they're actually related to derivatives of the input. Look at the solution back here again. Oop, sorry. The solution here was related to an integral of the input. Suppose I was making a measurement of c1 in trying to predict c2 instead? My prediction for c2 would be related to the integral of that measured signal. 
And we know integrals smooth signals out. So it's not going to be so sensitive to the measurement. So one way of doing this seems really sensible. And the other way of doing it seems a little bizarre. Well, we can construct these equations however we want. So there's something about how we formulate models that's going to make them either easily solvable, as DAEs, or quite difficult to solve. So these pop up all the time in engineering problems. They pop up in dynamic simulations, where we have some sort of underlying conservation principle, or constraints, or equilibria. So if we have to conserve something, like total energy, total mass, total momentum, total particle number, total atomic species, total charge, there will always be an algebraic constraint that tells us the total mass has to stay this, or the total number of atoms of a certain type has to stay this. Yeah, Ian. AUDIENCE: Just to be clear, were you saying, on the stirred tank example, that those were the same problem with different solution approaches? WILLIAM GREEN, JR.: So good question. Let me go back. So they are fundamentally different problems. So in one case I specified the value of c2 to be this gamma. I tried to make a measurement of c2 and then predict c1. In the other case, I specified c1. I tried to measure c1 and predict c2. So in one case I measured the input and tried to predict the output of the tank. And in the other case I measured the output and tried to use it to predict the input. Yeah? AUDIENCE: So physically the same system with a choice by the operator. WILLIAM GREEN, JR.: Yes. That's right. That's right. So I formulated the model differently. The same underlying system, but I formulated the model differently. And it led to completely different behavior in the underlying solution to the differential algebraic equations. So you've already solved problems that involve conservation.
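The sensitivity just discussed, where example two's prediction c1 = c2 + (V/q) dc2/dt amplifies measurement noise, can be sketched with a small numerical experiment. The tank parameters, sample spacing, noise level, and the steady signal gamma = 1 below are all hypothetical; the derivative is replaced by a backward difference of the sampled measurement.

```python
import random

q, V, dt = 1.0, 2.0, 0.01   # hypothetical tank parameters and sample spacing

def predict_c1(c2_samples):
    """Example two's solution c1 = c2 + (V/q)*dc2/dt, with the derivative
    of the measured outlet signal replaced by a backward difference."""
    return [c2 + (V / q) * (c2 - prev) / dt
            for prev, c2 in zip(c2_samples, c2_samples[1:])]

random.seed(0)
clean = [1.0] * 101                                        # noiseless measurement
noisy = [1.0 + random.uniform(-1e-3, 1e-3) for _ in range(101)]

clean_c1 = predict_c1(clean)
noisy_c1 = predict_c1(noisy)

# Measurement noise of size 1e-3 is amplified by roughly (V/q)/dt = 200x,
# because the prediction depends on the derivative of the signal.
worst = max(abs(c - 1.0) for c in noisy_c1)
```

With a clean signal the prediction is exactly 1; with tiny bounded noise the predicted inlet concentration swings by a large fraction of its value, which is the instability the lecture warns about when a solution depends on derivatives of an input.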
You've done things like ODE-IVPs on batch reactors, where the total amount of material in the reactor had to remain constant. Instead, you probably solved for the dynamics of each of the components undergoing the reaction. You might have checked to see if at each point in time the total mass in the reactor stayed the same. Because you solved all these differential equations, they were interconnected with each other but they all incurred some numerical error. It's possible that numerical error accrues in a way that actually has you losing mass from your system. So there may be benefits to actually trying to solve these sorts of systems of equations with, say, one less differential equation for one of the components and instead an algebraic constraint governing the total mass or total number of moles in the system. So these pop up all the time. You see them in models of reaction networks, where we try to use a pseudo steady state approximation. We say some reactions go so fast that they equilibrate. So they're not governed by differential equations but by algebraic equations for equilibria. They can pop up in models of control, where we neglect the controller dynamics. So you say I'm going to try to control this process and I get to instantaneously turn on or off this valve in response to a measurement. Actually, controllers have inherent dynamics in them. But if I don't put those dynamics in then I may get an algebraic equation instead of a differential equation for how the control process occurs. There are different types of DAEs that we talk about. So there's one type that's called semi-explicit DAEs. And in these types of differential algebraic equations we can write them in the form M dx dt is equal to f of x and t, some initial condition. You say that this looks like an ODE-IVP. M is called the mass matrix. And it may or may not be the case that M is invertible. M may or may not be a full rank matrix. So let's look at that stirred tank example one. 
This was the underlying equation, f of x and dx dt and t. Can write it in semi-explicit form. If c has two components, c1 and c2, then dc 2 dt-- oops, this is a typo here. I really apologize. This should be a 0, and this element should be a 1. I'll correct that online. I apologize for that. So this should be this matrix multiplied with this differential should give me dc 2 dt, this term here. And a 0 for the second equation. And then I've got q over v multiplying c1 and minus c2. So that gives me the first line here. I've got minus c1 coming on the second equation here. And those balance with a 0 and a gamma. So the lower line reads like this equation here and the upper line-- fix this typo-- replace 0 with one, reads like the upper equation here. This is the semi-explicit form of the differential equation. You can see this mass matrix is singular and has a row that's all zeros. There's no way to invert this matrix and convert it into a system of ODE-IVPs, which you already know how to solve. If the mass matrix is full rank, which can happen, you can formulate your model in such a way that it takes on this semi-explicit form. But if the mass matrix is full rank and invertible, then you just invert the mass matrix. And now you can apply all your ODE-IVP techniques to solve this thing. The mass matrix doesn't need to be constant. Could depend on the concentration. It can depend on the states as well in this form. I just chose an example where that isn't the case. But in general, it can depend on the states. If it depends on the states and we invert it, then we need to be sure that the mass matrix is invertible for all values of the states, which may or may not be easy to ensure. So here's what I'd like you to try to do. So here's that stirred tank example two. The only difference between this and the previous one is that c2 now is set equal to gamma instead of c1. I want you to try to write it in semi-explicit form. A sort of test of understanding. 
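For reference while trying example two yourself, example one's semi-explicit form M dc/dt = f(c, t) can be written out concretely. The flow rate, volume, and gamma signal below are hypothetical illustration values; the row ordering (differential equation first, constraint second) is one consistent choice, not necessarily the slide's.

```python
# Hypothetical parameter values and control signal for illustration.
q, V = 2.0, 10.0

def gamma(t):
    """Assumed inlet control signal (hypothetical)."""
    return 1.0 + 0.5 * t

# Mass matrix for stirred-tank example one, states c = [c1, c2]:
# row 0 is the differential equation for c2, row 1 is the algebraic constraint.
M = [[0.0, 1.0],
     [0.0, 0.0]]

def rhs(c, t):
    """Right-hand side f(c, t) matching the rows of M."""
    c1, c2 = c
    return [q / V * (c1 - c2),   # material balance: M row 0 picks out dc2/dt
            gamma(t) - c1]       # algebraic constraint c1 = gamma(t)

# M has a row of zeros, so det(M) = 0: it cannot be inverted to turn
# this into a plain system of ODE-IVPs.
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
```

At a consistent initial state, c1 = c2 = gamma(0), the residual f is zero in both rows, which is the kind of consistency the initial conditions must satisfy.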
Can you define the mass matrix? Can you write out the right-hand side of the equation in semi-explicit form? Take a second, work with your neighbor. See if you can figure out how to do that. AUDIENCE: Is this clear? OK to do this? WILLIAM GREEN, JR.: You can look back on the previous slide. Actually, the semi-explicit form is going to be the same. This is a 0, that's a 1. The only difference is going to be that c2 is now balanced with gamma, the input. So this minus 1 has got to shift over. So if you switch the 0 for the minus 1, you've got the semi-explicit form. There's another way that these equations can pop up. So we have this mass matrix. And it might be the case that M is diagonal over many of the states and then 0 for all the other states. So we might be able to write the equation in this form. This comes up all the time. So we might be able to write dx dt is a function of x, y, and t. And 0 is equal to another function of x, y, and t. And we have some initial conditions prescribed for the x's, of which we're taking the differential in one of these equations. And we've also got to satisfy those initial conditions for these variables y associated with the second nonlinear equation. The x's are called the differential states because they appear differentiated with respect to time in these equations. The y's are called the algebraic states because they only appear as themselves, and not as their time differentials, in any of these equations. So we might want to solve for both the differential and the algebraic states simultaneously. We have to know these algebraic states to determine the differential states. We have to know the differential states to determine the algebraic states. AUDIENCE: Professor? WILLIAM GREEN, JR.: Yes. AUDIENCE: [INAUDIBLE] WILLIAM GREEN, JR.: That's right. So here the x that I have is not all of the states. It's not all of the unknowns.
It's only the set of the unknowns of which I have to take a differential. That's right, that's right. So we've sort of tooled down the kinds of equations we might want to look at. We've gone from the generic differential algebraic equation to a semi-explicit form to this splitting, the differential algebraic splitting. Many problems can be formulated in this fashion. This one's the easiest one to think about, so that's the one that we'll discuss for most of the lecture. So let's look at some examples. I think examples are interesting to think about. So here's one. Think about a mass spring system. There's an algebraic equation that governs a conserved quantity, the total energy of this mass spring system. So the kinetic energy, 1/2 m times the velocity of the mass squared, plus the energy stored in the spring, 1/2 kx squared, has got to be equal to the total energy. And the total energy can change in time. So we have a differential algebraic equation. We have a differential state because we're taking the time derivative of x. And we'd like to be able to determine x as a function of time. So f of x, x dot and t. That's this equation, 1/2 mv squared plus 1/2 kx squared minus E is equal to 0. We'd like to solve this differential algebraic equation. It's well-posed. We have one equation for one unknown, one state. The equation has a solution, which is A times cosine omega t. It's not the only possible solution. It's a solution to this equation. We can substitute that solution into the equation and determine, for example, what this oscillation frequency omega has to be. It's the square root of k over m. Or what the amplitude has to be. It's the square root of 2E over k. So you've seen this from physics before. We've already solved differential algebraic equations. The thing you saw before, though, was actually conservation of momentum. You converted this entirely to a second order differential equation in x and solved it.
But in principle, we could have tried to find various solutions to this equation instead. We would have said, well, we've observed masses and springs and we know they oscillate in time. So let's propose some oscillatory solutions and see what values of the amplitude and omega we need to satisfy this equation. It would have been a perfectly acceptable way of solving this problem. The thing is, for this mass spring problem, the differential algebraic equation contains nonlinearities with respect to the states. It's nonlinear in the velocity and it's nonlinear in the position. Nonlinear equations in general are hard to solve. When you converted that to a momentum balance you got linear equations. You got the second derivative of position was equal to minus stiffness over mass times position. It was a linear differential equation to solve, which is easy to solve. So you've got higher order differentials, but you linearized the equation, so it made it easier to solve, in a certain sense. But we'd like to sometimes understand what we can do with these equations. There are certain limiting cases where we can do interesting things with these equations. I'm going to show you one now. So in the case where the Jacobian of this function f with respect to the differential of the state variable--so take the derivative of f with respect to x dot and hold time and the state constant--is a matrix that's full rank, then the DAE can be represented as an equivalent ODE. Here's how you can show that. The total differential of f is this Jacobian with respect to x dot times the change in x dot, plus the Jacobian with respect to x times the change in x, plus the partial derivative of f with respect to time times the change in time. And this total differential has to be equal to 0 because f is equal to 0 for all states and times. So let's divide by dt. We'd like to know how f changes with time. So for the total differential of f with time, we divide each of these little differentials by dt.
And then we solve for dx dot dt. That's going to involve moving this term to the other side of the equation and inverting this Jacobian. So if I can invert the Jacobian then I can convert my DAE system into a system of ODEs. This equals 0 shouldn't be here. I apologize. That's another typo. I move this term to the other side of the equation to where the 0 is and I solve. So I went from a first order equation in x to a second order equation in x. Two differentials of x with time instead of one differential of x with time. But I could convert it to an equivalent system of ordinary differential equations. Does that make sense? Do you see how that works? Any questions? Let's go back and look at an example. The mass spring system. So here's our ODE that we're trying to solve. I'm sorry, our DAE that we're trying to solve. Can you convert this to an equivalent ordinary differential equation using the approach I just described? Think you guys can figure that out? Try it. You can work together. [SIDE CONVERSATIONS] WILLIAM GREEN, JR.: You guys are either tired or friendships have been destroyed over the mid-term season. Maybe both. So let's compute the things we need to know. We needed to know the partials of this f with respect to the states and with respect to time. So partial f with respect to x dot, holding all the other variables constant, is m x dot. Partial f with respect to x is kx. Partial f with respect to t is minus dE dt. And I told you before that you could write this as dx dot dt is equal to minus df dx dot inverse multiplied by df dx. This was on the previous slide. And it just gives you conservation of momentum. This gives you the acceleration equal to the force, minus kx, divided by the mass m. So we can solve either, the DAE or the ODE. Here's a more complicated example. Well, it's more complicated. So a lot of times we do molecular dynamics simulations.
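Before moving on, the mass-spring result can be checked numerically: the proposed trajectory x(t) = A cos(omega t) should make the energy DAE residual vanish, and the same trajectory should satisfy the equivalent ODE m x'' = -k x. The constants m, k, E below are hypothetical, E is held constant, and the amplitude follows from setting v = 0 at maximum displacement, where 1/2 k A**2 = E.

```python
import math

m, k, E = 2.0, 8.0, 3.0           # hypothetical constants; E held constant
omega = math.sqrt(k / m)           # oscillation frequency from substitution
A = math.sqrt(2.0 * E / k)         # amplitude fixed by the energy constraint

def dae_residual(t):
    """f(x, xdot, t) = m*xdot**2/2 + k*x**2/2 - E along x = A*cos(omega*t)."""
    x = A * math.cos(omega * t)
    v = -A * omega * math.sin(omega * t)
    return 0.5 * m * v * v + 0.5 * k * x * x - E

def ode_residual(t):
    """The equivalent ODE m*xddot + k*x = 0 along the same trajectory."""
    x = A * math.cos(omega * t)
    xddot = -A * omega * omega * math.cos(omega * t)
    return m * xddot + k * x
```

Both residuals are zero (to roundoff) at every time sampled, confirming that the oscillatory ansatz solves the DAE and that the differentiated form reproduces the momentum balance.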
We try to model molecules moving around. There we also want to conserve energy. There's usually a conserved quantity, something we're trying to hold constant. So suppose we try to do that. The total energy in this system of molecules, maybe it's 1/2 mass times the velocity squared, where now x is the position of all these molecules and x dot is the velocity of all these molecules, plus the potential energy, which is a function of their positions. So the DAE representing this constraint is this. It's the same as for the mass spring system, where we've replaced the potential energy with a generic one. This actually isn't a well-posed DAE. We have one equation for, let's see, if we have n molecules in here we have three n states. We can move around in three dimensions. So this is a really ill-posed equation. If we try to apply the same procedure, then we say, well, this quantity still should be conserved. Over time this needs to be equal to zero. Then we can try to compute all these differentials. But we'll find out that partial f partial x dot, the Jacobian of f with respect to x dot, that's just the momentum. That's a vector, not an invertible matrix. So we can't convert this equation to an equivalent system of ordinary differential equations. The conversion working out was just something special about the one-dimensional system. Instead, what we find is zero is equal to the velocity dotted with the mass times the acceleration plus the gradient in the potential. So minus the forces acting on the molecules. We know that if we were to integrate the equations of motion for the molecules exactly, the momentum balances for those molecules exactly, then this term here would always be equal to zero. And we'd satisfy conservation of energy. Of course, all solutions of ordinary differential equations require approximations. So as you move the molecules around, their velocities and positions will tend to drift away from the solution where this is exactly in balance and equal to zero.
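That drift is easy to see with a small experiment. Below, a single harmonic oscillator stands in for a molecular system (a hypothetical substitute, not the lecture's example), comparing plain explicit Euler against velocity Verlet, one standard symplectic scheme. The step size and run length are arbitrary illustration values.

```python
m, k = 1.0, 1.0                      # hypothetical unit oscillator

def energy(x, v):
    """Total energy 1/2 m v^2 + 1/2 k x^2."""
    return 0.5 * m * v * v + 0.5 * k * x * x

def forward_euler(x, v, dt, nsteps):
    """Non-symplectic explicit Euler: energy grows a little every step."""
    for _ in range(nsteps):
        x, v = x + dt * v, v - dt * (k / m) * x
    return x, v

def velocity_verlet(x, v, dt, nsteps):
    """Symplectic half-kick / drift / half-kick: energy error stays bounded."""
    for _ in range(nsteps):
        v -= 0.5 * dt * (k / m) * x
        x += dt * v
        v -= 0.5 * dt * (k / m) * x
    return x, v

e0 = energy(1.0, 0.0)
e_euler = energy(*forward_euler(1.0, 0.0, 0.01, 10000))
e_verlet = energy(*velocity_verlet(1.0, 0.0, 0.01, 10000))
```

Over 10,000 steps the explicit Euler energy grows by a large factor (the system "heats up"), while the Verlet energy stays within a tiny bounded band of its initial value, which is the geometric property the next paragraph describes.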
If you desire to conserve energy in the molecular dynamics simulation, then you'd better have a method that somehow respects this geometric property instead: that the velocities are orthogonal to m x double dot plus grad V. And there is a set of ODE-IVP methods that do that. They're called symplectic integrators. And they integrate the equations of motion while exerting some control over the error in the total energy. So if you were just to take ODE45 and try to solve this system of equations, over long times you would find that the energy drifts away from the place where it started. Even though your positions are being integrated with reasonable accuracy, over time the energy will drift away from where you started. The system will heat up effectively or cool down. That's undesirable if you're trying to do modeling. So instead, one uses the symplectic integrators, which are designed to control the error in the energy in addition to integrating the equations of motion. So you can look those up and read about them. I think they're quite interesting. If you're going to do molecular modeling you'd like to understand better how they work. But actually, conservation of energy here doesn't give us a well-posed DAE. We actually need the momentum equations to determine the trajectories of the molecules. Let's talk about their numerical simulation. So let's think about these semi-explicit DAEs, the ones that can be split between differential and algebraic states. It's kind of useful to look at this problem in particular. And you might say, well, let's do the simplest possible thing. Let's do the forward Euler approximation. So let's take our derivative here and represent it as the state evaluated at t plus delta t minus the state at t, divided by delta t. Let's make the right-hand side of our equation be evaluated at time t. And let's try to step in time, do time marching forward to determine the trajectory of x and t. How do the states change?
We see the first equation is an easy one to solve because presumably I know x of t and y of t. Well, do I know y of t? y of t has to satisfy this second equation. So first, if I know x of t, I need to solve this nonlinear equation to determine y of t. Then I know x of t and y of t and I can step forward in time to determine x of t plus delta t. So every iteration here, with the simplest possible approximation for the differential, requires solution of a system of nonlinear equations. We already saw before that a forward Euler approximation is not a great one to apply to solutions of ODE-IVPs because it has very limited stability properties. You need small time steps in order to stably integrate in time. So even though this seemed like it was easy to get a solution for the differential equation, we've always got to satisfy this nonlinear equation here. So inherently, simulations of DAEs are implicit. They require solution of nonlinear equations in order to advance in time. There's actually no point in using this weakly stable method with its very small time steps. It makes more sense to choose a naturally implicit method for the differential equation, where I can take big time steps and still have reasonable stability of the integration. Does that make sense? Consider now the fully implicit DAE. So f of x, x dot, and t is equal to 0, and we have some set of initial conditions. x of 0 is equal to x0. Here there is no way in general to avoid solving systems of nonlinear equations. And so one substitutes often backwards difference approximations for x dot. And then you solve for the next point in time. So here's an example with the backward Euler, implicit Euler approximation. So I replace the time derivative with the difference between the state at point tk and the state at point tk minus 1, divided by the difference between tk and tk minus 1. The other state variables are evaluated at tk. So x at tk. And I evaluate time at tk.
And I try to satisfy this equation and use it to determine x of tk. So I solve this system of nonlinear equations for x at tk. And then I go to the next point in time, tk plus 1. And I solve the same equation, but replacing tk with tk plus 1 and tk minus 1 with tk. So at each iteration I solve a system of nonlinear equations. No more painful than doing the forward Euler approximation I showed you before. So that's good. But still a lot of work. We might wonder how do we control the error in these sorts of approximations. And that's what we're going to talk about next. So here's the equivalent time marching formula applied to a semi-explicit DAE. So I've got to satisfy the equation 0 is equal to the derivative of x with respect to time, which I approximate with the backward Euler approximation, minus f evaluated at tk. And the equation g of x at tk, y at tk. So I solve for this x of tk, and actually y of tk as well-- I'm sorry, I left that off-- y of tk as well. Got to solve for the differential and the algebraic states simultaneously. And I use a backwards difference formula to try to do this in a stable way. One way to think about these sorts of DAEs in particular is as exceptionally stiff ordinary differential equations. Did you guys discuss stiffness? So imagine if I replace the 0 with, say, dy dt, and I introduce a multiplicative constant, lambda, on the right-hand side here. So as lambda becomes very large, the system becomes increasingly stiff. The dynamics of this equation take place over very, very short time scales. So short in fact, that eventually, if I let lambda diverge to infinity, I've just got to satisfy this algebraic equation at each point in time. So these are exceptionally stiff equations. And the methods for solving these equations share a lot in common with stiff ODE-IVP solvers. So I've got a couple of minutes left.
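To make the time marching concrete, here's a minimal sketch (my own example system, not one from the lecture) of backward Euler applied to a semi-explicit DAE, x dot = -y with 0 = y^3 + y - x. Eliminating x_k reduces each step to one scalar nonlinear equation, which a basic Newton iteration handles:

```python
def newton_scalar(F, dF, y, tol=1e-12, maxit=50):
    # basic Newton iteration for one scalar nonlinear equation F(y) = 0
    for _ in range(maxit):
        step = F(y) / dF(y)
        y -= step
        if abs(step) < tol:
            break
    return y

def backward_euler_dae(x, y, dt, steps):
    """Semi-explicit DAE:  x dot = -y,  0 = y**3 + y - x.
    Backward Euler: substitute x_k = x_{k-1} - dt*y_k into the algebraic
    equation, then solve one scalar nonlinear equation for y_k per step."""
    for _ in range(steps):
        x_prev = x
        F = lambda yk: yk**3 + (1.0 + dt) * yk - x_prev
        dF = lambda yk: 3.0 * yk**2 + (1.0 + dt)
        y = newton_scalar(F, dF, y)   # the nonlinear solve at every step
        x = x - dt * y                # then advance the differential state
    return x, y

# consistent initial condition: 0 = g(x0, y0) = y0**3 + y0 - x0
x0, y0 = 2.0, 1.0
x, y = backward_euler_dae(x0, y0, dt=0.01, steps=100)
print(x, y)   # the algebraic constraint still holds at the end
```

Even this simplest implicit scheme needs a nonlinear solve at every time step; for systems you'd use a vector Newton iteration or a library solver in place of the scalar one here.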
I'm going to work through some examples. So consider stirred tank example one: dc2 dt is related to q over v, flow rate over volume, times the difference in the concentrations coming in and out of the tank. And c1 is equal to this input signal gamma. Maybe that's a measurement and I'm trying to predict c2. So can you apply the backward Euler method to these equations to determine c1 and c2? What would one step of the backward Euler method look like, say, going from time 0 to time delta t, or time t to t plus delta t, or tk to tk plus 1, however you want to write it? So do you get something like this? So this equation is easy. c1 at tk is gamma of tk. I've got to substitute up here: c2 of tk minus c2 of tk minus 1 over delta t-- delta t is just tk minus tk minus 1. And then solve for c2 of tk. And you get a formula like this. This formula was based on an approximation for the derivative, which had a leading order error that was proportional to tk minus tk minus 1. When I go through and I solve for c2 of tk, the leading order error in c2, the local truncation error, is going to be order tk minus tk minus 1 squared. So as I shrink the spacing between each of my two time points, the local accuracy, the error induced in one advance of this time stepping algorithm, is going to scale like the step size squared. So if I halve the spacing, the error will go down by a factor of four. I reduce the spacing by an order of magnitude, the error will go down by two orders of magnitude. This is a really desirable sort of error control in a method. And we can do better. You've seen us do better with other methods. Let's look at this equation now. This is stirred tank example two. I just replace c1 with c2. You don't have to write this one down. I'll just show it to you. The notes are online so you're not missing anything. All these things will pop up in the notes online. So c2 at tk is equal to gamma of tk.
I got to come up to my first equation now and I have to substitute an approximation for the derivative. It's this, the backwards difference formula. So I find c1 of tk is c2 of tk plus v over q times c2 of tk minus c2 of tk minus 1 divided by the time step. Remember, this approximation for the derivative has a leading order error that goes like tk minus tk minus 1. So my approximation for c1, the local approximation for c1, is not second order accurate. It's only first order accurate. I applied the same approximation method to both equations. One equation I got something that was second order accurate. The next equation I got something that was first order accurate. They look almost identical. It might be hard to see from the start why it should work out that way. That's how it works out. Let's do one more example. Here's a system of DAEs. c2 dot is equal to c1. c3 dot is equal to c2. And 0 is equal to c3 minus gamma. So gamma, again, is some measurement. We solve this equation to determine c3. We substitute c3 up here to determine c2. We substitute c2 up here to determine c1. But we want to find the solution with the backward Euler method. And the solution there is going to be c3 of tk is gamma of tk. c2 is related to the derivative of c3. So I use my backwards difference formula for the derivative of c3. c1 is related to the derivative of c2. So I use my backwards difference formula for c2. But how accurate are each of these approximations? So here I have to determine c2 from an approximation for the derivative of c3. We know this backwards difference approximation has a leading order error proportional to delta t, the time step. Now I got to approximate c1 with the backwards finite difference approximation in c2. We know this should incur an error proportional to the time step delta t, except I don't know c2 at tk exactly. Actually, the exact value of c2 at tk is this plus something proportional to the time step.
Which means the exact value of c1 is this plus something that's order one, because I carry over the error from my solution to the previous equation. And I divide it by the time step. So I went from having something-- well, let's see. The first equation had order delta t squared local truncation error. As I shrink the spacing, my solution gets more and more locally accurate. And it seems to do it in a fairly robust fashion. The next system of equations we solved had a local truncation error proportional to delta t. Again, I can shrink delta t and I can make my solution more and more accurate. That seems like what you want in a numerical algorithm. This one, there's no hope. Change delta t, who knows what you're going to get? And this looks fairly innocuous. You just look at it and you say, well, OK. Let's solve it and see what happens. Well actually, you can't solve this problem accurately with the backward Euler method. There's something fundamentally different about this problem from the previous one. There's something fundamentally different about stirred tank example two from stirred tank example one. Right? You might have already perceived that. You can see what's going on here. c3 is equal to gamma. That's the exact solution. c2 is equal to gamma dot. And c1 is equal to two derivatives of gamma, the exact solution to this equation. So c1 is incredibly sensitive to changes in gamma. And that's peculiar. And it has important numerical consequences. So I'm going to show you, in our next lecture on Wednesday, how to formalize an understanding of the differences between these problems. And we'll talk about some more subtleties associated with differential algebraic equations. You guys can pick up your quizzes. We should do it outside rather than inside because the next class has to start.
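The sensitivity of c1 to gamma can be checked numerically with a short sketch (my own illustration, using gamma = sin t with a tiny alternating perturbation standing in for measurement error): the second backward difference that the scheme produces divides that perturbation by delta t squared.

```python
import math

def c1_backward_euler(gamma_vals, dt):
    """Recover c1 (two derivatives of gamma) from the third example system
    c2 dot = c1, c3 dot = c2, 0 = c3 - gamma, using the backward
    differences the backward Euler scheme produces."""
    c3 = gamma_vals
    c2 = [(c3[k] - c3[k - 1]) / dt for k in range(1, len(c3))]
    c1 = [(c2[k] - c2[k - 1]) / dt for k in range(1, len(c2))]
    return c1

dt = 1e-3
t = [k * dt for k in range(2001)]
clean = [math.sin(tk) for tk in t]
eps = 1e-6   # a tiny alternating "measurement error" on gamma
noisy = [g + eps * (-1) ** k for k, g in enumerate(clean)]

c1_clean = c1_backward_euler(clean, dt)
c1_noisy = c1_backward_euler(noisy, dt)
worst = max(abs(a - b) for a, b in zip(c1_clean, c1_noisy))
print(worst)   # about 4*eps/dt**2 = 4.0
```

A perturbation of one part in a million in the measured gamma shows up as an order-one error in c1, a million-fold amplification; that is the numerical symptom of the fundamental difference this lecture points to.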
MIT 10.34 Numerical Methods Applied to Chemical Engineering, Fall 2015. Lecture 35: Stochastic Chemical Kinetics 2.
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. WILLIAM GREEN: So let's get started. This is my last lecture of the class, and I want to thank you guys. This has been a really good class. I really enjoyed the good questions on the forum especially. So I don't know if you guys enjoyed it, but I liked it anyway. So I wanted to give-- my lecture here is going to be a wrap up on the stochastic methods, and then Professor Swan's going to give a review on Wednesday. It will be the last lecture of this class. And then the final is a week from today, I think. Morning? Yeah, morning. All right. All right. So as you did in the homework and we talked about, there are a lot of multidimensional integrals we'd like to be able to evaluate. And a lot of them have this form that is a probability density times some f that we're trying to evaluate. That's the integrand. And then I didn't draw in the millions of integral symbols around it. But usually x is a very high dimensional thing, so you have to integrate a lot of things. And often we actually don't really know p. We know a weighting factor, w. So I just drew three of the integrals there, but there might be 10 to the 23rd integrals there. So w is a lot easier for us to deal with than anything with all those integral signs. So for example, the Boltzmann distribution is one of these things. And also, just recall back earlier, we were talking about models versus data. The Bayesian analysis of experiments says that you take your likelihood function, the likelihood that you would have observed the data you did if the parameters truly had a certain value, theta.
And then you multiply that times the prior knowledge of the parameter values, and that gives you the weighting factor really for any integrals involving parameters, because it's really giving you the w of the parameters given everything you know-- the prior information, and also the new data you measured. And put them all together, you get the weighting factor for the parameters. And so you can evaluate all kinds of multidimensional integrals over all the parameters in the problem. So very often you might do experiments where you maybe have four or five or six adjustable parameters, theta, some of which you know something about ahead of time, and you want to put them together this way. And that should be a summary of everything you know. But it's sort of a goofball summary, because it's a multidimensional function, right, a function with a lot of variables. But now you know how to do integrals of functions with lots of variables. So you can compute things like-- what's the expectation value of some function of those parameters. So if f here was a function of theta-- why is it bouncing? OK. I wonder why you guys always look like this. All right. So f of theta, any function you want, you can evaluate-- it's a function of the parameters, like a prediction of what will happen in some new experiment that depends on the parameters. You'd have it as f of theta, and then you could integrate it over w of theta to get the expected value of the result of the new experiment. And actually you can get the distribution that way, too. Does this make any sense? I see one person thinks it makes sense. OK. So just because we did Monte Carlo using Boltzmann, which is a common one to use-- there are other weighting factors. I guess that's my comment. And anytime you end up with something that has some integrals with weighting factors, then you should think, oh, I can use Metropolis Monte Carlo.
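As a minimal sketch of that idea (my own toy example, not the course's H2O2 problem): Metropolis Monte Carlo only ever needs the ratio w(x')/w(x), so you can average an f over a weighting factor whose normalizing integral you never compute. Here w is an unnormalized Gaussian, for which the exact expectation of x squared is 1.

```python
import math, random

def metropolis_expectation(w, f, x0, step, n_samples, seed=0):
    """Estimate the expectation of f under an unnormalized weight w(x)
    with Metropolis Monte Carlo; only ratios w(x')/w(x) appear, so the
    normalizing constant (all those integrals) is never needed."""
    rng = random.Random(seed)
    x, wx = x0, w(x0)
    total = 0.0
    for _ in range(n_samples):
        xp = x + rng.uniform(-step, step)   # trial move
        wxp = w(xp)
        if wxp >= wx or rng.random() < wxp / wx:
            x, wx = xp, wxp                 # accept the move
        total += f(x)                       # rejected moves recount x
    return total / n_samples

w = lambda x: math.exp(-0.5 * x * x)        # unnormalized Gaussian weight
est = metropolis_expectation(w, lambda x: x * x, 0.0, 2.5, 200_000)
print(est)   # should come out near the exact value, 1
```

Because only weight ratios enter the accept/reject test, the same loop works whether w is a Boltzmann factor or a Bayesian likelihood times prior.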
And also, anytime you have experiments, a lot of times you think of the, OK, I want to get the best fit least squares approach, which is what you already knew before I taught you anything. But it's very important to remember that you can also write the actual probability density, the shape of the probability density of the parameters. And that way you can get all the correlations between the parameters and include all the previous knowledge of the parameters and stuff like that. So it's a very important thing to remember from this course, this Bayesian formula. All right. But Metropolis Monte Carlo, as you saw in the homework, is not always that easy. So you need to choose the step length in every dimension. It's sort of the factor you're going to multiply times random numbers to figure out how far to step from one state to the next. And it's not so obvious how to choose that before you start, because you may not know that much about the shape of the integrand, right? Actually, even the shape of the p, the probability distribution-- you might not be that sure about it. And so if you accidentally choose it too large, what will happen is it tries to take big steps away from a probable state and ends up at really improbable states, and then most of the time it won't accept that transition, right? Metropolis Monte Carlo will just repeat the same state over and over again. So if you see you just keep sitting there getting the same value over and over and over again, then that's warning you that you must have chosen your stepsize too large. Alternatively, if you choose it too small, all your steps will be accepted because you're not moving anywhere. So you're basically staying at the same point over and over again, except it's slightly different, because you took a little tiny step. But that way you won't necessarily sample the whole range. And you can see that by plotting, say, the distance from the initial state position.
You initiated the Monte Carlo Markov chain at some state. And if you plot the probability density of the distance from all the new states to the original state, and if that number is super teeny tiny, then you might be a little concerned that you really didn't cover the whole space. But again, you have to know what is super teeny tiny, so you have to have a range in mind. Yeah? AUDIENCE: Can you do this adaptively? WILLIAM GREEN: Yeah, yeah, yeah. So good algorithms for this try to pick it up that there's a problem. Like if it keeps on not accepting any states, it would try to shrink the step, the delta. Or if it's always accepting the states, then it might try to increase it. You actually want to accept most of the transitions, because it makes it more efficient. But you don't accept them all. So you want to do some-- I don't know-- keep it to 0.9 probability or something like that, or 0.8. And there is probably a whole journal paper about what's the optimal way to do the adaptive stepsizing to get the best convergence rate. Yeah? AUDIENCE: How did you prevent your initial guess [INAUDIBLE]? WILLIAM GREEN: Well, to me, I don't think that throwing them out is a good idea. I don't know. People do it. I think it's that you really just need to make enough steps that you are sampling everywhere, and then it shouldn't matter. And a good thing to do is actually start from a completely different initial state and make sure you get the same value of the integral, and that will give you a lot more confidence that you're not sensitive to it. If you're sensitive to it, you're in trouble. Because you really won't know how many to throw away. And you don't know if you actually even achieved the real integral. I think in the hydrogen peroxide example, actually, it was symmetrical, so it didn't matter too much. But if I had given you an asymmetrical one-- here's the dihedral angle.
It has to do with the orientation of the H, sort of out of the plane. And it should be symmetrical, you can sort of go plus and minus, and there's a high probability region over here and a high probability region over here. And if you start in one of them, say, over here, and you just sample around here, you get a lot of points that all have basically the same value of the dihedral angle. And if you never see any of these, then it's no good, right? Now in this particular case, you might still get the right answer by luck, because they're exactly symmetrical. If this one is a little bit asymmetrical, then you'd get the wrong answer. Does that make sense? So you really want to make sure you're really sampling over all the physically accessible region. But again, this is a common problem for us all the time, is that when you're doing a calculation, you want to know what the answer is before you start. This is an absolutely critical thing. Because how the heck are you going to know if you have a bug? Right? The computer's going to give you some number at the end. You have to know actually what the answer is before you start. I know this seems weird, right? But anyway, it's the truth. If you're going to calculate anything, you really want to know what the answer is roughly. What the units are, what the order of magnitude is. Some things about it. And ideally you should try to think of some simple test you can do to try to check whether your code's right, whether things are reasonable. The reasonableness test: is what you get reasonable? So this is the same kind of thing. Like you draw the H2O2 and say, oh, I think the dipole moment should be about such and such, because I know the typical charges on an H atom and O atom. And if you get some number that's way off from that, then obviously you made a mistake somewhere, right? It could be a mistake in your code, or it could be you just didn't sample correctly because your delta was wrong, for example. All right?
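The two-lobe trap is easy to reproduce in a sketch (a made-up double-well weight of my own, not the actual H2O2 dipole calculation): start the chain in one lobe and compare a tiny step size against one big enough to jump between lobes.

```python
import math, random

def metropolis_chain(w, x0, step, n, seed=1):
    # plain Metropolis random walk; returns every visited state
    rng = random.Random(seed)
    x, wx = x0, w(x0)
    samples = []
    for _ in range(n):
        xp = x + rng.uniform(-step, step)
        wxp = w(xp)
        if wxp >= wx or rng.random() < wxp / wx:
            x, wx = xp, wxp
        samples.append(x)
    return samples

# two narrow, symmetric lobes at x = -2 and x = +2, like the two minima
w = lambda x: math.exp(-8.0 * (x - 2.0) ** 2) + math.exp(-8.0 * (x + 2.0) ** 2)

small = metropolis_chain(w, 2.0, 0.1, 50_000)   # step too small to cross
large = metropolis_chain(w, 2.0, 4.0, 50_000)   # step can jump lobes
frac_small = sum(s < 0.0 for s in small) / len(small)
frac_large = sum(s < 0.0 for s in large) / len(large)
print(frac_small)   # the left lobe is essentially never visited
print(frac_large)   # roughly half the samples in each lobe
```

The small-step chain reports a perfectly converged-looking average while never seeing half of the physically accessible region, which is exactly why a sanity estimate of the answer matters before you trust the number.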
Now I know I've told this to students every time, and every time I tell this to students, they totally reject this idea that you should know the answer before you start. But it's absolutely critical. I mean, my whole job as a professor is I tell my students, please calculate this. But before I tell them that, I know what the answer is, right, roughly. And so that way when they come back and show it to me, I can say, oh, it's reasonable, OK, I believe them. Yeah. So if I think the number should be 20 and they come back with 14.3, I say, 20, 14.3, they're pretty close. OK, so I believe them. And when I think it's 20 and they come back with 10 to the minus 6, then I'm pretty confident, I tell them, I think you must have made a mistake. And they're like, no, no, no, I did a great job. I'm sure my calculation is perfect. And then we're really probing, and they think-- I don't know-- I have it in for them. But no, actually I knew what the answer is, so obviously it can't be right. So they must have made a mistake. Anyway, you should be like that, too. You want to be able to be self-critical about what you expect your answers to be before you do them. All right. What else is problematic with Monte Carlo? This is very problematic. If you want to achieve high accuracy, you need a really large number of states. And it's funny. It's like you can get pretty good accuracy with not very many samples because of the behavior of one over square root of n. It has very big changes at small values of n. And with just a relatively small number of samples, you get something that's halfway reasonable. But then if you have three significant figures and you want to get the fourth one, it's a killer, because you have to do 100 times more effort, 100 times more samples, to get one more significant figure. And so you've already done 100,000, and now all of a sudden you have to do 10 million samples. And then you only get one more [INAUDIBLE] of that.
Now you have to do a billion samples, and it's like, this gets really out of hand. So that's a really unfortunate thing about it. But the nice behavior at the beginning is, if you only want a few significant figures, you can get them pretty darn fast with Monte Carlo, certainly a lot better than trying to do the trapezoid rule in multidimensions or something like that. All right. So you guys just did Monte Carlo. Are there things I should tell you about or we should talk about, problems you ran into? Now you see, when I set that homework problem up for you, I helped you out a lot. I don't know if you noticed that. So the original problem had four atoms, and they each have three xyz positions. So there's 12 coordinates. And then I did clever tricks to get it down to, I think, six, maybe. OK. So going from six dimensions to 12 dimensions, that's actually a pretty big deal. And so if I had given you the original 12-dimensional problem, you could still compute it and actually use the same kind of code as you wrote. But the number of samples you'd need to get a good number gets really wildly different. Right, and so again, this has to do with knowing the answer ahead of time. You know, I knew, OK, that the magnitude of the dipole moment doesn't depend on the orientation of the molecule, so therefore, I'm going to get rid of all the rotational degrees of freedom. I know it doesn't depend on the translational position of the molecule, so therefore, I get rid of all those. And so before I even did any calculation, I can tell you I can get rid of six degrees of freedom, so that'll help you a lot. But it's a similar kind of thing. You want to do that all the time yourself, is try to think, what can I do that's easier than just doing the brute force problem? Yeah? AUDIENCE: So on that problem, changing the max stepsize, you could change the percentage of step-- or [INAUDIBLE]. WILLIAM GREEN: Yeah.
AUDIENCE: And I think the mean stayed roughly around the same, but the shape of the distribution actually changed a little bit. WILLIAM GREEN: OK. AUDIENCE: So how is-- WILLIAM GREEN: So I think it might be this kind of thing is one part of it. It's like if you only integrate one lobe of the dihedral, you actually get a dipole moment value that's pretty reasonable, even if you totally missed this other lobe. But if you did a different stepsize, you might start getting some samples over here, too. So the distribution would look a lot different, but you still end up getting roughly the mean. But also it's partly that the-- the good thing about the Monte Carlo method is that the first few samples, no matter what they are, give you something that's on the order of magnitude of what the value of the thing is. So even a really lousy sampling at the beginning still gives you something that's halfway reasonable for the average. AUDIENCE: So if you're trying to recreate the overall distribution histogram, then how do you know which max stepsize to choose, because [INAUDIBLE]. WILLIAM GREEN: Yeah, so the rule of thumb I've heard is that people try to get an acceptance ratio between 0.2 and 0.8. So it means you want-- Yeah. But I really don't know. I'm not an expert in this field. But I'm sure if you read papers, people have big discussions about what the acceptance ratio should be in order to take big enough steps that you have a chance to get the weird stuff. In this kind of problem, if you're doing steps, say, in the dihedral, you may have to step quite a long distance to find the other lobe. So sometimes having some pre-knowledge of what you think the shape of the things are is a big help. Yeah? No? Sorry, maybe I didn't answer your question. Is that-- AUDIENCE: Yeah. WILLIAM GREEN: Yeah. All right. All right, so that's Monte Carlo.
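The acceptance-ratio rule of thumb is easy to monitor in practice. Here's a sketch (my own diagnostic loop on an unnormalized Gaussian weight; the particular step sizes are made up for illustration) that tabulates the accepted fraction for a too-small, a reasonable, and a too-large step:

```python
import math, random

def acceptance_ratio(w, x0, step, n, seed=2):
    """Fraction of Metropolis trial moves accepted -- a quick diagnostic
    for whether the proposal step size is badly chosen."""
    rng = random.Random(seed)
    x, wx, accepted = x0, w(x0), 0
    for _ in range(n):
        xp = x + rng.uniform(-step, step)
        wxp = w(xp)
        if wxp >= wx or rng.random() < wxp / wx:
            x, wx = xp, wxp
            accepted += 1
    return accepted / n

w = lambda x: math.exp(-0.5 * x * x)   # unnormalized Gaussian weight
ratios = {step: acceptance_ratio(w, 0.0, step, 100_000)
          for step in (0.01, 2.5, 100.0)}
for step, r in ratios.items():
    print(step, round(r, 3))   # near 1, in between, and near 0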
To Then we talked about the more difficult problem of where the probability distribution is not stationary, and in fact, we don't even have the w written out. All we have is a differential equation that divides the probability distribution. And we did it here for the case of discrete states, and that's the right one to use, for example, for the kinetics equation, if you want to get them exactly right. Now that differential equation looks really easy, right? Just a the times p. It's a linear differential equation system. You knew how to do that one probably in your second semester of undergraduate taking differential equations class, right? They did dialyze the matrix. Yeah? Remember this? So that looks really simple. But the problem is that the number of states is so big. So have any of you tried problem two on the optional homework? You haven't tried? Has anyone even looked at it? I know somebody looked at it because I got some question about it. All right. At least one person looked at it. Let me tell you guys briefly what this problem is. It's a really relevant problem. If you go work for Professor Jensen, Professor Braatz, maybe Professor Roman, you might ed up doing this calculation. This is not a crazy calculation. It's very simple. All they have is like a chess board of sites on a catalytic surface. And they some number of sites. And this is what-- if you take a crystalline catalyst and cleave it, you'll have a whole repetitive pattern on the surface, and somewhere along there is some active site that does something. You can tell that experimentally because you stick that catalyst in the presence of some gases, you make products that you didn't make when it wasn't there. So it did something. So the way people model this is they say, OK, there's some active sites on here, and there's probably one of them per unit cell maybe. And the site can either be empty, like if I ultra high vacuum, I pump on it and heat it, should be nothing there. 
And then if I expose it to a little bit of some gas A, some of the sites might have A molecules sticking on them now. And I also exposed to some sites gas B. Maybe one of these sites might have a B molecule on it, too. And if A and B can react and make my product, C, then maybe they're sitting next to each other and they might react with each other. OK, so this is the whole way you look at this problem. And so all I want to know is, if I have some A's and B's sitting on the surface, sort of how often are they going to react? And the problem is a little bit more complicated. It says, well, suppose I also know if I put a whole lot of B on the surface, what I end up seeing is coke. My whole surface gets totally coked up. So let's model that by saying, well, if suppose two B's are next to each other, then some probably reacting to turn to some coke product that sticks there permanently and just poisons the catalyst. OK? And this is real life, too, right? So if you're trying to run a catalytic process, a lot of times, you have to run with one reagent in great excess to keep the surface kind of clean, keep it covered by the unharmful reactant. And then you let little bits of the other reactant come down and react with the A really quickly. But you don't let too much of the second reactant come down, because it might cause a problem like dimerize or coke or something. OK, so that's the model. Some of these guys have A's, some of them have B's. And in the unlikely case that two of these guys react, they both turn around and then turn them into S's. Coke. Soot. And then that part's dead forever after that. OK, so that's the model. So in the case that they have, they had 100 sites. I think it was 10 by 10 if I remember correctly. So it's 100 sites, and each site can either be empty, or have an A molecule, or have a B molecule, or have a coke molecule, S. So there's four different states on each of 100 different sites. So how many states are there altogether? 
Its' four to the what? 100. OK. So that's a pretty big number. So that's how many states. It's about 10 to the 60th, I think. All right. So this is a very large number. This is bigger than Avogadro's number. This is really a lot of states. You wouldn't have thought that just a stupid little problem with just a 10 by 10 piece of catalyst, and three different things that can stick there, would give you so many numbers, but it does. So now I have a problem that if I have-- my p vector is now a probability that has a number for each of the 10 to the 60th possible states. And then the matrix m is that number squared, right? Dimension. So it's 10 to the 120th power elements inside the matrix m. So even though the matrix m is very sparse, this might still be a problem, because 10 to the 120th is really big, OK? And your computer can only hold 10 to the 9th or 10 to the 10th numbers in it, so this is going to be a problem. And also that's not just what p is. That's what p is at this instant, say, 1 millisecond after t naught. And then if I wait to 1.1 milliseconds after t naught, p will be different. So I have 10 to the 60th numbers that change with time in some way that I don't know. OK? So this is actually a really hard problem to solve it. And so everybody uses Kinetic Monte Carlo to do it, because there's no way I can possibly even list all the elements of p. And in fact, if I sample, if there's 10 to 60th states, no matter how good my sampling algorithm is, I'm never going to sample 10 to the 60th states. So there's a lot of states I'm never going to get-- not even get one sample of. I'm just never going to encounter them at all, because there's just too many states. But anyway, people do it anyway. So we go ahead and we'll do the Gillespie algorithm, which you guys-- we talked about in class. Now to compute a Gillespie trajectory, we have to compute two random numbers, right? We compute one random number, it tells us how long we wait until the next thing happens. 
And then we compute a second random number that tells us which of the many possible things that happened actually happened. And so if I have a case like this where I had the-- to the original case. What can happen, the A could react with the B, the B could react to the B. The A could come back off. The A can move to the next adjacent site. The B can move to the next adjacent site. A lot of things can happen, right? So quite a variety of things can happen from this initial state. And so you have to, if you're solving this problem, you have to write that down. Yeah, John Paul? AUDIENCE: So the special waiting times are [INAUDIBLE] process where we haven't decided there's going to be an arrival, [INAUDIBLE] arrival times by that [INAUDIBLE]. WILLIAM GREEN: Yeah. AUDIENCE: And then we make something happen. But I mean I feel like you could run the exact same simulation without counting the times. Get the same answer and then after the fact come and [INAUDIBLE] into the system. [INAUDIBLE] doesn't seem to be [INAUDIBLE] WILLIAM GREEN: Yes, that's right. Yeah, that's true. So you only do the time calculation because you care about the time. If you don't care about the time, you only care about the steady-state solution, then you can probably do it some other way. AUDIENCE: You don't need to calculate the times inside that. You could calculate the entire state space trajectory without any reference to the time, yeah? WILLIAM GREEN: Yes. Yeah, so you can get the sequence of all things that happened if it didn't give you the time, because it doesn't matter, right? For the time. It just means the sequence of what happened. AUDIENCE: Yeah. WILLIAM GREEN: The sequence of states is all that matters. But if you want the kinetic information, then you also want the time, because it might be it sat in one state for a million years, and the other state-- AUDIENCE: [INAUDIBLE] WILLIAM GREEN: Yeah. Yeah, that's right. Let's go ahead. OK, so maybe make this cheaper. 
Make it one random number for j. No? OK. All right. So things to just keep in mind about this. So the cost here is to compute in random numbers. We have to compute at least-- well, some number of random numbers. I don't know how many. And that number depends on the length of time I want to simulate and what my delta t is. Where the delta t is sort of like the average time between the events happening. It's like one over a in the [INAUDIBLE].. And so if that delta t is very small, and my time I want to simulate is very large, that means that each trajectory is going to have a lot of events. And that means I'm going to have to generate a lot of random numbers. And so generate one trajectory is going to be really expensive. On the other hand, if the delta t were about the same size as the total time I'm simulating, then I might only get one or two events, and so I only have to generate four random numbers for each sample. All right? And I just have to keep in mind that I'm going to have a lot of low probability states that I'm not going to sample at all. I might even have some high probability states I don't sample, because I just can't do enough samples. And I had the same exact problem I have with the Metropolis Monte Carlo and with all these stochastic methods that the scaling is one over square root of n. So I can get a rough idea pretty easily, but I don't get a really precise number of anything, then I'm going to have a problem because I'm going to have trouble to do enough samples to really refine the number. The key thing in this is that with anything like this where I'm going to have so many states, I'm going to run a lot of trajectories to get at all-- sample all the things that can happen. And so I'm probably not going to be able to store all the trajectories I ran on my computer, because I'm going to run so many. And so I really want to compute things on the fly, which means I should decide ahead of time all the averages, all the f's I'm trying to compute. 
So I might only compute f and f squared. And I don't know, whatever else I want to compute. The average value of the number of A's on the surface. I mean, there might be a lot of diagnostics I can compute to just check everything's OK. Think of that beforehand, code it to compute them as it runs, and then we only have to store the running averages of all those quantities, and I don't have to store anything about exactly which state sequence I hit. Is that OK? All right. Now if you remember, in this equation, there was an initial condition on the right-hand side, which we totally ignored so far. But that's actually pretty important. And in a lot of real cases, from your macroscopic continuum view of things, you have an idea of the average values of things. And what you don't know is about the discreteness, and you don't know about the correlations. All you know is sort of an average. I expect to have two A's and three B's on the surface or something, from the partial pressure of A and B and the binding constants of A and B. So I have some idea. So I think I know these averages. And so what people usually do is they use a Poisson distribution for the probability of the exact number of A. So suppose I think on average I'll have two A's. Well, I might have one A, I might have three A's on the surface. And so I'll just use the Poisson distribution of n to get an estimate of sort of the expected width of the distribution. And so that's the formula for the Poisson distribution. You have to know the average values of Na, Nb, Nc. And then you can see that the N appears in the-- on average, N appears in the exponent and in the factorial [INAUDIBLE]. Right? And so really every time you start a new trajectory in Kinetic Monte Carlo, what you should do is sample from the Poisson distribution to get a new initial condition as well. OK?
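Both ideas, deciding the observables ahead of time and keeping only running averages, and drawing a fresh Poisson-distributed initial condition for each trajectory, can be sketched in a few lines of Python. All names here are illustrative, not from the course code:

```python
import math
import random

def poisson_sample(mean, rng=random):
    # Multiply uniform randoms until the product drops below exp(-mean);
    # the count of multiplications is Poisson-distributed with that mean.
    # Fine for small means like <N_A> ~ 2.
    limit = math.exp(-mean)
    n, prod = 0, rng.random()
    while prod > limit:
        n += 1
        prod *= rng.random()
    return n

def kmc_averages(run_trajectory, mean_counts, n_traj, rng=random):
    # Outer KMC loop: fresh Poisson initial condition per trajectory, and
    # only running sums of the observable f and f^2 are stored, not the
    # trajectories themselves.
    sum_f, sum_f2 = 0.0, 0.0
    for _ in range(n_traj):
        init = {sp: poisson_sample(m, rng) for sp, m in mean_counts.items()}
        f = run_trajectory(init)  # one trajectory reduced to a scalar observable
        sum_f += f
        sum_f2 += f * f
    mean = sum_f / n_traj
    var = sum_f2 / n_traj - mean * mean
    std_err = math.sqrt(var / n_traj)  # the one-over-square-root-of-n scaling
    return mean, std_err
```

The memory cost is independent of how many trajectories you run, and the returned standard error shows the one-over-square-root-of-n convergence directly.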
And that way you're sampling over both all the possible initial conditions and all the possible things that can happen to them, to really get the real average of what you might want. All right. Now we just realized how horrible this problem is because it has so many states. And so then we're going to have to think immediately, what can I do to make this faster and easier? And so here are some things people do. One thing is that people try to figure out what are the really fast processes, and do I care about them. And some of the fast processes, you might say, well, like diffusion of A moving to here, and then it moves to here, and then it moves back to here, then it moves back to here. That's of no interest to me. I don't care where the A is bound on the surface. All right? The only way I care about it is whether it reacts with the B when it's sitting next to it. In other respects, I really don't care about the diffusion. So having the time constant for the diffusion be the real time constant might not be that important. So then you can do different things. One thing is you can assume it's infinitely fast, and you say, well, every time I look at a site, I assume that this A has a 1/10 chance of being here or here or here or here or here, all these different spots, as if it's equilibrated around all the empty sites. That would be one possible way to do it. That's the infinitely fast diffusion idea. Another idea is to say, well, let's slow down the diffusion just to help out our computation. So I say, well, on average, I might get one reaction a millisecond. In reality, the diffusion time is a nanosecond. But I don't care about all that nanosecond stuff. Let's pretend that it's a tenth of a millisecond. So it'll still be pretty much equilibrated on the time scale of the reactions, but that way I'll be able to accelerate my calculation by five orders of magnitude by going from a nanosecond time scale to a tenth of a millisecond time scale.
So there's a lot of different tricks like that. If you read a paper that does Kinetic Monte Carlo, you've got to read exactly what they did. But they usually do something like this, because it's just totally out of hand if you try to model everything perfectly. Also, the low probability events, the really low probability events, you're never going to sample. So if I have a process that happens on my catalyst that takes 10 hours to happen, and my main reaction happens on a millisecond time scale, I'm never going to be able to run out to 10 hours. So I might as well just forget it. So if I know experimentally that the coking thing only happens on a 10-hour time scale, I'd just take it out of there. And I don't have to clutter up my calculation with all these S's. I'm not going to form any of them in my time scale anyway. And so therefore, I cut the number of states. Instead of being 4 to the 100th, now it's 3 to the 100th, because I got rid of all the coke. And so now I've drastically reduced the size of my problem. It's not often you can do a reduction like this. That's a pretty big reduction in the size of a problem. All right? And then you have to have an idea of what adequate sampling is. So you're going to see some lower probability processes compared to some higher probability ones. And you have to know, when are these low ones so small that they're just statistical noise? And this is the margin of error problem. Do you ever see the polling, like in the political polls, they always say, so-and-so many people believe in evolution plus or minus something, right? Well, the way they get the plus or minus is from the square root of the number of samples with a positive result. So the margin of error on low probability things is much larger. So the number of people who believe that Professor Greene is God is a very small number. So if you find somebody like that, you have to figure they're within the margin of error, right? OK?
But the number of people who are going to attend 10.34 this morning is a big enough number that the margin of error in that is maybe two or three people. It's not going to be 100 people. That make sense? All right. So the more likely the event is, the more likely you'll get a lot of samples of it. If you count the error as roughly the square root of the number of samples you got of that event, then the statistical error in your sample of the high probability events won't be that large. Whereas for the low probability events, it could be really gigantic. All right, so just for an example. In the main reaction, A plus B is really fast because it's a good catalyst. So I'm going to get a lot of trajectories going to show that reaction. So that's good. And I should get good samples [INAUDIBLE] But the coking reaction is really slow. At least I hope it's slow, otherwise the catalyst is no good. And so if it's really slow, I might not get very many of those guys. And so even if I left it in the calculation, I might not be able to reach any conclusion, because I might only see one coking event out of all of the 100,000 trajectories I run. And so then I won't know whether to say anything about that or not. And then if I leave the diffusion in and it's too fast, then my delta t is going to be too small, which means that my CPU time to compute a single trajectory is going to be too large, which means that I won't be able to get good sampling because I won't be able to run very many trajectories. So I might want to do something to get rid of that. All right. Now there's another method that a lot of people use. I think actually Professor Swan uses this sometimes. Is that correct? Yes. So this is another method. And what it is, is you solve the equations of motion of the atoms or clumps of atoms, typically using Newton's equations of motion.
And typically people use force fields that were fitted to some experimental data, and maybe with some quantum chemistry data as well, to get some idea of the forces between the atoms and the force with which the molecules, when they bump into each other, how they interact. And typically, it's done classically using Newton's equations of motion. But if you don't like that, you can put in the quantum mechanical effects in a couple of different ways, and my group works with one way called RPMD. And you can get pretty good agreement with the quantum chemical results by doing this fancier version of molecular dynamics. So there's some different equations you solve, but basically the same. And so you can do it. And there's a nice algorithm called the velocity Verlet algorithm that almost everybody uses in this field. And what's nice about that one is that it can do a lot of steps. A lot of steps of moving the atoms around. And after you do a million steps like that, you compute the energy by adding up all the potential energies and kinetic energies of all the atoms. And it'll still be about the same energy as you started from. Whereas with a lot of methods, if you do the integral like that and you calculate the energy at the end, it won't be the same as the energy you started from, because the methods have little round-off errors, and they kind of accumulate in a way that messes up the energy. And so this particular algorithm is a nice ODE solver method that has a property that is good for conserving the energy. And then a lot of people use a thermostat, because they care about systems that are in contact with a thermal bath. So you're sampling a few molecules, maybe 100 molecules or 1,000 molecules, but not 10 to the 23rd molecules.
So you have your 100 molecules or your 1,000 molecules, and you pretend that they're in contact with some bath, and you're watching those 100 molecules wiggle around, and their energy is not exactly conserved because they're exchanging energy with a thermal bath. And so there's different things called thermostats, which are computer ways of adjusting the velocities of the atoms periodically as if they got a kick from the thermal bath that makes them go up or down. And that's very important if you're trying to do, say, chemical kinetics, because the reactions are so slow that most of the time you'll watch the atoms wiggling around and nothing happens. You need the unusual case when you get a big kick somewhere that gives you enough energy that you overcome a barrier to make a reaction happen. All right. So that's what molecular dynamics is. And numerous people in the world and in this department and on the campus do these kind of calculations. And there's two ways that they're used. One way is as an alternative to Metropolis Monte Carlo. So you're trying to compute basically multidimensional integrals, basically statistical mechanics integrals, more or less. And instead of using Metropolis Monte Carlo, you decide to use molecular dynamics. This is a tricky choice between those two options. The nice thing about the molecular dynamics is I didn't have to choose any stepsize, basically, right? It's like the time scale was set by the real physical motions of the atoms. And if I don't want to think about it, I can just put in, what's the real physical time scale of the vibration of some atoms. And I don't have to think about it at all and just run it. But something you should think a little bit about is that the molecular dynamics equations we're solving-- it's really a time accurate method. We actually get real time dependences. We're using the real physical time, which turns out to be pretty darn fast. So molecular vibration time scales are like tens of femtoseconds.
And so that's really fast. And so you have a really tiny delta t. But if you're trying to actually compute a steady state property, then maybe this isn't-- it's not necessarily the best way to do it, OK? Because you're doing something where you're doing extra effort to keep the time accuracy. It's sort of along the lines of John Paul's question. If you're doing Kinetic Monte Carlo, if you didn't really care about the time, then you don't have to spend time computing the time, right? And the same thing here is if you don't really care about the time, then you might want to use the Metropolis Monte Carlo instead, because if time's not really in your problem, then you can tailor it, take steps that don't really have to be physical or related to physical amounts of time, and you can still get the right integral. Whereas here, it's going to necessarily do things that are exactly the physical amount of time. And some processes in the real world are pretty darn slow, at least compared to 100 femtoseconds. So you might have problems trying to compute them by a time accurate method. Now on the other hand, sometimes you want to compute time dependent properties. And this is more or less an exact simulation of what the molecules really do on the time scale that you're simulating. So if that's what you care about, like what's happening on the time scale of picoseconds and nanoseconds, this might be exactly right, because you're actually simulating with a tool that's time accurate on the time scale of what you're really trying to measure. And so some chemical reactions, some kinds of energy transfer processes, like Professor Tisdale's group has exciton transfers that are happening on picosecond time scales. It's tailored to that kind of problem. OK, but this is the limitation. So you have to use a very small delta t. Therefore, your total time is typically limited to nanoseconds as far as you can integrate, if you just count how many time steps you would need.
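To make the integrator concrete, here is the standard textbook form of velocity Verlet for a single degree of freedom (a sketch, not the course's code). For a harmonic oscillator with a small time step, the total energy after 10^5 steps is still within a tiny bounded error of its starting value, which is the conservation property described above:

```python
def velocity_verlet(x, v, force, mass, dt, n_steps):
    # Standard velocity Verlet: advance the position with the current force,
    # then advance the velocity with the average of the old and new forces.
    # The scheme is symplectic, which is why the total energy stays bounded
    # over very long runs instead of drifting.
    f = force(x)
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * (f / mass) * dt * dt
        f_new = force(x)
        v = v + 0.5 * ((f + f_new) / mass) * dt
        f = f_new
    return x, v
```

With force(x) = -x, mass 1, dt = 0.01, and 100,000 steps, the energy 0.5 v^2 + 0.5 x^2 stays within about one part in a thousand of the initial value 0.5.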
So if you're trying to determine some kind of static equilibrium property, you start from some initial guess at the positions of the atoms and the initial velocities, and then physically, it takes some time scale for that initial guess to relax to the real equilibrium, because you don't know the real equilibrium situation of the system. And that time scale, if it's longer than nanoseconds, you're in trouble, because it's not going to be done before you've finished the calculation. And you never will have even achieved the equilibrium situation. Also, if you have a situation like that hydrogen peroxide case we talked about, suppose it takes a millisecond for the hydrogen peroxide to change conformation from the dihedral angle being one way to being the other way. If I can only, say, simulate for a nanosecond, then I'm never going to see that happen. So I'm never going to jump from the one conformer to the other. On the other hand, if I care what happens inside the one conformer, everything inside there probably happens on nanosecond time scales, and so I'll get really good sampling of what's happening inside the one conformer. And similarly for the dynamic processes, if you have a process that happens on a nanosecond time scale, this is really the ideal way to do it. If you have a process that's happening on millisecond or seconds or hours time scales, then this is not really the way to do it at all. And then there's an initial condition problem with this. It's kind of related to the Kinetic Monte Carlo initial condition problem. So in the Kinetic Monte Carlo, we had to sample over all the possible initial conditions. We did it with the Poisson distribution. There is a similar issue here, which is how do I start the initial conditions? Where do I arrange the atoms to be to start out? I really want to sample over all the different possible ways the molecules could be arranged. And particularly if I have some-- say I have a protein.
And the protein has some conformation it likes to sit in, and then it can unfold and go to some other conformation. Maybe there's two or three conformations like this. The time scales for those changes, again, might be milliseconds. I'm not going to be able to follow them with the molecular dynamics, so I need to have some sampling method to set me up in each of the different conformations that I want to sample. And then I can follow very accurately what would happen over a couple nanoseconds after it's in that conformation. But I'll hardly ever see it actually achieve the other conformation on that time scale. So I guess what I want to say to you guys is these are different tools for really different purposes. The Metropolis Monte Carlo, the Kinetic Monte Carlo, and the molecular dynamics, they're all good for some kinds of problems. But none of them is good for all problems. And I often get journal papers where somebody uses the wrong method for the wrong problem. And so I have to reject it. I've often had people come that want to do postdocs for me. And they're talking about their thesis, and I see the poor kid has spent five years of his life using the wrong tool for the problem he's doing. It's very sad. So don't let that be you, OK? So don't just use a tool because you know it. Say, does this tool work for this problem? If not, I've got to find a new tool. And just make sure you're using tools that match the problems you're trying to solve. I think that's all I got. So I have 10 more minutes left in class. Any questions you guys have? Yeah? AUDIENCE: Just have one on weighting functions for the Monte Carlo simulations. I'm still a little shaky on how you always determine them. It seems like it was given to us on the problem we did on the homework, and then in the example in class, you just set it to an even distribution, or a uniform distribution? WILLIAM GREEN: Yeah, yeah, yeah. AUDIENCE: How do you choose what's best, how do you know [INAUDIBLE]?
WILLIAM GREEN: Yeah. Well, yes. So you get some integral. This integral of g of x, something. All right? Where this is a lot. And then you have to figure out, what am I going to do so I can solve this thing? And the clever thing is to figure out, hmm, can I rewrite this as p of x times f of x, because I know how to solve those kind. And actually, I don't even-- Yeah, and even if I don't know what p is really, maybe I can write it as w of x over some normalization integral here. That will be my p of x. So most of these things you can do with uniform distributions if you want. But then your sampling can be extremely inefficient, because you'll sample a lot of regions that have very low probability of really being there. But this is like a cleverness thing about can I figure out what's the p times f that's equal to g that's going to work the best for me. And work the best, that's a good question. So one way of working the best is if f is almost constant, because if f is perfectly constant, then I get the right answer in the first sample. OK? So one thing is trying to figure out f to be really pretty constant. The second one is to try to figure out a p that's very sharply focused. So I don't need to sample a lot of x values to get it. Now these two are kind of at odds with each other, the sharp p and the flat f. Probably whole PhD theses in applied mathematics have been written about what's the optimal choice of p and f. A lot of times, I'm not that smart, so if I have a problem in stat mech, I'll always do a Boltzmann factor. It may not really be the best thing to do, but that's what I would do, right? And if I was doing a Bayesian problem, it's sort of like the w is given to you, right? Where's that? That's the formula I know, so I'm going to use that w. Maybe there's a more clever way to do it, but that's what I normally would do.
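The rewrite g(x) = p(x) f(x) turns the integral into an average of f over samples drawn from p. A small sketch; the example integral and the inverse-transform sampler are illustrative choices, not from the lecture:

```python
import math
import random

def importance_sample(f, sample_p, n):
    # Monte Carlo estimate of the integral of p(x) f(x) dx:
    # average f over draws from p. The flatter f is over the support
    # of p, the faster this converges.
    total = 0.0
    for _ in range(n):
        total += f(sample_p())
    return total / n

# Example: integral from 0 to infinity of x e^{-x} dx = 1,
# with p(x) = e^{-x} sampled by inverse transform and f(x) = x.
random.seed(1)
estimate = importance_sample(lambda x: x,
                             lambda: -math.log(1.0 - random.random()),
                             200000)
```

The statistical error scales as one over the square root of n, so 200,000 samples give roughly two to three correct digits here.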
Actually, one thing that I don't know if we've ever talked about explicitly, but is very important to know, is that these kinds of formulas have w of theta. That's the joint probability that theta one is something, and theta two is something, and theta three is something, and theta four is something. It's that probability density. But a lot of times, you don't care about that level of detail. You may have only a few of those parameters that matter to you. So you really care, you would like to know p of-- I don't know, theta one, theta two. And you'd like to get rid of all the rest of the thetas, because these are the two thetas you really care about. Maybe these are the ones you think you're controlling in your experiment, that you're trying to determine. The other ones-- somebody already measured the mass of the proton, so you really don't want to determine the mass of the proton again, and you're not going to really say anything about it even if you did. If your calculation says that it made the mass of the proton a little bit different than what the standard book says, you might believe in your heart that it's true, but you're probably not going to say it, because you're like, I better double check before I do that. But I'm sure that I can determine the length of my reactor, because I measured that with a meter stick, and nobody else knows that number, so I'm sure that's the theta that I'm really going to be in control of here. So oftentimes you do this. You want theta one, theta two, but you actually have theta one, theta two, theta three, quite a few of them in the formula. And so you can do what's called a marginal integral. Suppose I have-- I don't know-- theta three, theta four. I could do D theta three, D theta four. And this is like integrating out these degrees of freedom to get the probability density that I want, all right?
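On a grid, the marginal integral is just a sum of the joint density over the nuisance directions, weighted by the grid spacing. A toy rectangle-rule sketch; the names and shapes here are illustrative:

```python
def marginalize(joint, d_nuisance):
    # joint[i][k] ~ w(theta1_i, theta3_k) tabulated on a uniform grid;
    # integrate out theta3 with a rectangle rule to get the (unnormalized)
    # marginal density in theta1.
    return [sum(row) * d_nuisance for row in joint]
```

For a joint density tabulated as [[1, 1], [2, 2]] with nuisance spacing 0.5, the marginal in theta1 comes out [1.0, 2.0].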
So if you have some case where all you care about is the variance of something, and you don't care about all the rest, you can kind of integrate them out. That's a very handy trick to do. Well, you can do similar things with the Boltzmann things. Like for example, in a lot of the Boltzmann distributions, we don't actually care about the momenta, because what we measure, say, is like a crystallography thing. We see positions of atoms. We don't see velocities anyway. So a lot of times, people will just integrate all the velocities out of the problem right at the beginning. It depends on what you want, right? You're in charge of your problem. Any more questions? Yes, Kristen. AUDIENCE: OK, well, I just have an announcement. WILLIAM GREEN: OK. AUDIENCE: [INAUDIBLE] today at office hours, we'll talk about KMC, probably from about 5:00 to 5:30, but come any other time [INAUDIBLE] questions. And we just posted a poll for the final review session. It's either going to be Wednesday evening or Friday morning, so if you could vote on that as soon as possible, that would be [INAUDIBLE]. WILLIAM GREEN: OK, got that? So the final review is either Wednesday evening or Friday morning, you have to vote. And if you come today, 5:00 to 5:30, it'll be all about Kinetic Monte Carlo. And other times today are about anything you want to talk about. And the homework solution will be posted shortly, I think, for the last homework that was graded. Anything else? All right, good luck as you study and good luck on the exam.
MIT 10.34 Numerical Methods Applied to Chemical Engineering, Fall 2015. Lecture 8: Quasi-Newton-Raphson Methods.

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. JAMES SWAN: OK. Should we begin? Let me remind you, we switched topics. We transitioned from linear algebra, solving systems of linear equations, to solving systems of nonlinear equations. And it turns out, linear algebra is at the core of the way that we're going to solve these equations. We need iterative approaches. These problems are complicated. We don't know how many solutions there could be. We have no idea where those solutions could be located. We have no exact ways of finding them. We use iterative methods to transform non-linear equations into simpler problems, right? Iterates of systems of linear equations. And the key to that was the Newton-Raphson method. So I'm going to pick up where we left off with the Newton-Raphson method, and we're going to find out ways of being less Newton-Raphson-y in order to overcome some difficulties with the method, shortcomings of the method. There are a number of them that have to be overcome in various ways. And you sort of choose these so-called quasi-Newton-Raphson methods as you need them. OK, so you'll find out: you try to solve a problem, and if the Newton-Raphson method presents some difficulty, you might resort to a quasi-Newton-Raphson method instead. Built into MATLAB is a nonlinear equation solver, fsolve. OK, it's going to happily solve systems of nonlinear equations for you, and it's going to use this methodology to do it. It's going to use various aspects of these quasi-Newton-Raphson methods to do it. I'll sort of point out places where fsolve takes from our lecture and implements things for you.
It will even use some more complicated methods that we'll talk about later on in the context of optimization. Somebody asked an interesting question, which is how many of these nonlinear equations am I going to want to solve at once? Right? Like I have a system of these equations. What does a big system of nonlinear equations look like? And just like with linear equations, it's as big as you can imagine. So one case you could think about is trying to solve, for example, the steady Navier-Stokes equations. That's a nonlinear partial differential equation for the velocity field and the pressure in a fluid. And at a high Reynolds number, that non-linearity is going to present itself in terms of inertial terms that may even dominate the flow characteristics in many places. We'll learn ways of discretizing partial differential equations like that. And so then, at each point in the fluid we're interested in, we're going to have a non-linear equation that we have to solve. So there's going to be a system of these non-linear equations that are coupled together. How many points are there going to be? That's up to you, OK? And so you're going to need methods like this to solve that. It sounds very complicated. So a lot of times in fluid mechanics, we have better ways of going about doing it. But in principle, we can have any number of nonlinear equations that we want to solve. We discussed last time the Newton-Raphson method, which was based around the idea of linearization. We have these nonlinear equations. We don't know what to do with them. So let's linearize them, right? If we have some guess for the solution, which isn't perfect, but it's our best possible guess. Let's look at the function and find a linearized form of the function and see where that linearized form has an intercept. And we just have an Ansatz. We guess that this is a better solution than the one we had before. And we iterate.
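The iteration just recapped, written for more than one dimension, solves the linearized system J(x) dx = -f(x) at each step and adds dx to the current guess. A minimal two-dimensional sketch, using Cramer's rule for the 2-by-2 solve; the test system below is an illustrative choice, not from the lecture:

```python
def newton2d(f, jac, x, n_iter=20):
    # x is a pair (x1, x2); f returns (f1, f2); jac returns the 2x2 Jacobian.
    for _ in range(n_iter):
        f1, f2 = f(x)
        (a, b), (c, d) = jac(x)
        det = a * d - b * c  # must be nonzero for the step to exist
        # Solve [[a, b], [c, d]] (dx1, dx2) = (-f1, -f2) by Cramer's rule.
        dx1 = (-f1 * d + f2 * b) / det
        dx2 = (-f2 * a + f1 * c) / det
        x = (x[0] + dx1, x[1] + dx2)
    return x
```

For f1 = x1^2 + x2^2 - 1 and f2 = x1 - x2, the iterates converge to (1/sqrt(2), 1/sqrt(2)) from a nearby starting guess; for large systems the 2-by-2 solve is replaced by a general linear solver.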
It turns out you can prove that this sort of a strategy-- this Newton-Raphson strategy-- is locally convergent. If I start with a guess sufficiently close to the root, you can prove mathematically that this procedure will terminate with a solution at the root, right? It's going to approach, after an infinite number of iterates, the root. That's wonderful. It's locally convergent, not globally convergent. So this is one of those problems that we discussed. Take a second here, right? Here's your Newton-Raphson formula. You've got it on your slides. Take a second here and-- this is sort of interesting. Derive the Babylonian method, right? Turns out the Babylonians didn't know anything about Newton-Raphson, but they had some good guesses for how to find square roots, right? Find the roots of an equation like this. See that you understand the Newton-Raphson method by deriving the Babylonian method, right? The iterative method for finding the square root of s as the root of this equation. Can you do it? [SIDE CONVERSATION] JAMES SWAN: Yes, you know how to do this, right? So calculate the derivative. The derivative is 2x. Here's our formula for the iterative method. Right? So it's f of x over f prime of x. That sets the magnitude of the step. The direction is minus this magnitude. It's in one d, so we either go left or we go right. Minus sets the direction. We add that to our previous guess. And we have our new iterate, right? You substitute f and f prime, and you can simplify this down to the Babylonian method, which said take the average of x and s over x. If I'm at the root, both of these should be square root of s. And this quantity should be zero exactly, right? And you'll get your solution. So that's the Babylonian method, right? It's just an instance of the Newton-Raphson method. It was pretty good back in the day, right? Quadratic convergence to the square root of a number.
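That derivation is easy to check numerically. A short sketch runs the Babylonian iteration for the square root of 2 and verifies the quadratic convergence just mentioned: the ratio e_{i+1} / e_i^2 settles near a constant (about 0.35 for this problem), so each step roughly doubles the number of correct digits:

```python
import math

def babylonian_sqrt(s, x0, n_iter=6):
    # Newton-Raphson on f(x) = x^2 - s: x <- x - (x^2 - s)/(2x),
    # which simplifies to averaging x with s/x. Returns all iterates.
    xs = [x0]
    for _ in range(n_iter):
        xs.append(0.5 * (xs[-1] + s / xs[-1]))
    return xs

iterates = babylonian_sqrt(2.0, 2.0)
errors = [abs(x - math.sqrt(2.0)) for x in iterates]
# Quadratic convergence: e_{i+1} / e_i^2 is bounded by a constant,
# roughly 1/(2*sqrt(2)) here. Only the first few ratios are formed,
# before the error hits machine precision.
ratios = [errors[i + 1] / errors[i] ** 2 for i in range(4)]
```

The errors go roughly 6e-1, 9e-2, 2e-3, 2e-6, 2e-12 over the first five iterations, which is the digit-doubling behavior.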
I mentioned early on that computers got really good at computing square roots at one point, because somebody did something kind of magic. They came up with a scheme for getting good initial guesses for the square root. This iterative method has to start with some initial guess. If it starts far away, it'll take more iterations to get there. It'll get there, but it's going to take more iterations to get there. That's undesirable if you're trying to do fast calculations. So somebody came up with some magic scheme, using floating point mathematics, right? They masked some of the bits in the digits of these numbers. A special number to mask those bits. They found that using optimization, it turns out. And they got really good initial guesses, and then it would take one or two iterations with the Newton-Raphson method to get 16 digits of accuracy. That's pretty good. But good initial guesses are important. We'll talk about that next week on Wednesday. Where do those good initial guesses come from? But sometimes we don't have those available to us. So what are some other ways that we can improve the Newton-Raphson method? That will be the topic of today's lecture. What does the Newton-Raphson method look like graphically in many dimensions? We talked about this Jacobian. Right, when we're trying to find the roots of a non-linear equation where our function has more than one dimension-- let's say it has two dimensions. So we have an f 1 and an f 2. And our unknowns are x 1 and x 2; they live in the x1 x2 plane, right? f 1 might be, say, this bowl-shaped function I've sketched out in red, right? It's three dimensional. It's some surface here. Right? We have some initial guess for the solution. We go up to the function, and we find a linearization of it, which is not a line but a plane. And that plane intersects the x 1, x 2 plane at a line. And our next best guess is going to live somewhere on this line. Where on this line depends on the linearization of f 2. Right?
So we've got to draw the same picture for f 2, but I'm not going to do that for you. So let's say this is where the equivalent line from f 2 intersects the line from f 1, right? So the two linearizations intersect here. That's our next best guess. We go back up to the curve. We find the plane that's tangent to the curve. We figure out where it intersects the x 1 x 2 plane. That's a line. We find the point on the line that's our next best guess, and continue. Finding that intersection in the plane is the act of computing Jacobian inverse times f. OK? If we project down to just the x 1 x 2 plane, and we draw the curves where f 1 equals 0, and f 2 equals zero, right? Then each of these iterates-- we start with an initial guess. We find the planes that are tangent to these curves, or to these surfaces. And where they intersect the x 1 x 2 plane. Those give us these lines. And the intersection of the lines gives us our next approximation. And so our function steps along in the x1 and x2 plane. It takes some path through that plane. And eventually it will approach this locally unique solution. So that's what this iterative method is doing, right? It's navigating this multidimensional space, right? It moves where it has to to satisfy these linearized equations, right? Producing ever better approximations for a root. Start close. It'll converge fast. How fast? Quadratically. And you can prove this. I'll prove it in 1D. You might think about the multidimensional case, but I'll show you in one dimension. So the Newton-Raphson method said, xi plus 1 is equal to xi minus f of xi over f prime of xi. I'm going to subtract the root, the exact root, from both sides of this equation. So this is the absolute error in the i plus 1 approximation. It's equal to this. And we're going to do a little trick, OK? The value of the function at the root is exactly equal to zero, and I'm going to expand this as a Taylor series about the point xi.
So f of xi plus f prime of xi times x star minus xi, plus this second order term as well. Plus cubic terms in this Taylor expansion, right? All of those need to sum up and be equal to zero, because f of x star by definition is zero. x star is the root. And buried in this expression here is a quantity which can be related to xi minus f of xi over f prime, minus x star. It's right here, right? xi minus x star, xi minus x star. I've got to divide through by f prime. Divide through by f prime, and I get f over f prime. That's this guy here. Those things are equal in magnitude then, to this second order term here. So they are equal in magnitude to 1/2, the second derivative of f, divided by f prime, times xi minus x star squared. And then these cubic terms, well, they're still around. But they're going to be small as I get close to the actual root. So they're negligible, right? Compared to these second order terms, they can be neglected. And you should convince yourself that I can apply some of the norm properties that we used before, OK? To the absolute value. The absolute value is the norm of a scalar. So these norm properties tell me that this quantity has to be less than or equal to, right? This ratio of derivatives multiplied by the absolute error in step i squared. And I'll divide by that absolute error in step i squared. So taking the limit as i goes to infinity, this ratio here is bound by a constant. This is a definition for the rate of convergence. It says I take the absolute error in step i plus 1. I divide it by the absolute error in step i squared. And it will always be smaller than some constant, as i goes to infinity. So it converges quadratically, right? If the relative error in step i was order 10 to the minus 1, then the relative error in step i plus 1 will be order 10 to the minus 2. Because they've got to be bound by this constant.
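For reference, the algebra being read off the slide can be written out in one dimension, with x* denoting the root:

```latex
\begin{align*}
x_{i+1} - x^* &= x_i - x^* - \frac{f(x_i)}{f'(x_i)} \\[4pt]
0 = f(x^*) &= f(x_i) + f'(x_i)\,(x^* - x_i)
  + \tfrac{1}{2} f''(x_i)\,(x^* - x_i)^2 + O\!\big((x^* - x_i)^3\big) \\[4pt]
\Rightarrow\quad \frac{f(x_i)}{f'(x_i)} &= (x_i - x^*)
  - \frac{f''(x_i)}{2 f'(x_i)}\,(x^* - x_i)^2 + \dots \\[4pt]
\Rightarrow\quad x_{i+1} - x^* &= \frac{f''(x_i)}{2 f'(x_i)}\,(x_i - x^*)^2 + \dots,
\qquad
\big|x_{i+1} - x^*\big| \le C\,\big|x_i - x^*\big|^2
\end{align*}
```

with the constant C roughly the magnitude of f'' over 2 f' at the root, which is why the argument needs f prime at the root to be nonzero.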
If the relative error in step i was 10 to the minus 2, the relative error in step i plus 1 has got to be order 10 to the minus 4, or smaller, right? Because I square the quantity down here. I get to double the number of accurate digits with each iteration. And this will hold so long as the derivative evaluated at the root is not equal to zero. If the derivative evaluated at the root is equal to zero, this analysis wasn't really valid. You can't divide by zero in various places, OK? It turns out the same thing is true if we do the multidimensional case. I'll leave it to you to investigate that case. I think it's interesting for you to try and explore that. It follows the 1D model I showed you before. But the absolute error in iterate i plus 1, divided by the absolute error in iterate i-- here's a small typo here. Cross out that plus 1, right? The absolute error in iterate i squared is going to be bound by a constant. And this will be true so long as the determinant of the Jacobian at the root is not equal to zero. We know the determinant of the Jacobian plays the role of the derivative in the 1D case. When the Jacobian is singular, you can show that linear convergence is going to occur instead. So it will still converge. It's not necessarily a problem that the Jacobian becomes singular at the root. But you're going to lose your rate of quadratic convergence. And this rate of convergence is only guaranteed if we start sufficiently close to the root. So good initial guesses, that's important. We have a locally convergent method. Bad initial guesses? Well, who knows where this iterative method is going to go. There's nothing to guarantee that it's going to converge, even. Right? It may run away someplace. Here are a few examples of where things can go wrong. So if I have a local minimum or maximum, I might have an iterate where I evaluate the linearization, and it tells me my next best approximation is on the other side of this minimum or maximum.
And then I go up, and I get the linearization here. And it tells me, oh, my next best approximation is on the other side. And this method could bounce back and forth in here for as long as we sit and wait. It's locally convergent, not globally convergent. It can get hung up in situations like this. Asymptotes are a problem. I have an asymptote, which presumably has an effective root somewhere out here at infinity. Well, my solution would like to follow the linearization, the successive linearizations, all the way out along this asymptote, right? So my iterates may blow up in an uncontrolled fashion. You can also end up with funny cases where our Newton-Raphson steps continually overshoot the root. These can be functions that have a power-law scaling right near the root, such that the derivative doesn't exist. OK? So here the derivative of this thing, if s is smaller than 1 and x equals zero, won't exist, right? There isn't a derivative that's defined there. And in those cases, you can often wind up with overshoot. So I'll take a linearization, and I'll shoot over the root. And I'll go up and I'll take my next linearization, and I'll shoot back on the other side of the root. And depending on the power s associated with this function, it may diverge, right? I may get further and further away from the root, or it may slowly converge towards that root. But it can be problematic. Here's another problem that crops up. Sometimes people talk about basins of attraction. So here's a two-dimensional, non-linear equation I want to find the roots for. It's cubic in nature, so it's got three roots, which are indicated by the stars in the x1 x2 plane. And I've taken a number of different initial guesses from all over the plane and I've asked-- given that initial guess, using the Newton-Raphson method, which root do I find? So if you see a dark blue color like this, that means initial guesses there found this root.
If you see a medium blue color, that means they found this root. See a light blue color, that means they found this root. And this is a relatively simple function, relatively low dimension, but the plane here is tiled by-- it's not tiled. It's filled with a fractal. These basins of attraction are fractal in nature. Which means that I could think that I'm starting with a solution right here that should converge to this green root because it's close. But it actually goes over here. And if I change that initial guess by a little bit, it actually pops up to this root over here instead. It's quite difficult to predict which solution you're going to converge to. Yes? AUDIENCE: And in this case, you knew how many roots there are. JAMES SWAN: Yes. AUDIENCE: Often you wouldn't know. So you find one, and you're happy. Right? You're happy because [INAUDIBLE] physical. Might be the wrong one. JAMES SWAN: So this is the problem. I think this is about the minimum level of complexity you need. Which is not very complex at all in a function to get these sorts of basins of attraction. Polynomial equations are ones that really suffer from this especially, but it's a problem in general. You often don't know. I'll show you quasi-Newton-Raphson methods that help fix some of these problems. How about other problems? It's good to know where the weaknesses are. Newton-Raphson sounds great, but where are the weaknesses? Let's see. The Jacobian-- might not be easy to calculate analytically, right? So far we've written down analytical forms for the Jacobian. We've had simple functions. But maybe it's not easy to calculate analytically. You should think about what are the sources for this function, f of x, that we're trying to find the roots for. Also we've got to invert the Jacobian, and we know that's a matrix. And matrices which have a lot of dimensions in them are complicated to invert. There's a huge amount of complexity, computational complexity, in doing those inversions.
It can take a long time to do them. It may be undesirable to have to constantly be solving a system of linear equations. So we might think about some options for mitigating this. Sometimes it won't converge at all, or not to the nearest root. This is this overshoot, or basin of attraction, problem. And we'll talk about these modifications to correct these issues. They come with a penalty though. OK? So Newton-Raphson was based around the idea of linearization. If we modify that linearization, we're going to lose some of these great benefits of the Newton-Raphson method, namely that it's quadratically convergent, right? We're going to make some changes to the method, and it's not going to converge quadratically anymore. It's going to slow down, but maybe we'll be able to rein in the method and make it converge either to the roots we want it to converge to or converge more reliably than it would before. Maybe we'll be able to actually do the calculation faster, even though it may require more iterations. Maybe we can make each iteration much faster using some of these methods. OK, so here are the three things that we're going to talk about. We're going to talk about approximating the Jacobian with finite differences. We'll talk about Broyden's method for approximating the inverse of the Jacobian. And we're going to talk about something called damped Newton-Raphson methods. Those will be the three topics of the day. So here's what I said before. Analytical calculation of the Jacobian requires analytical formulas for f. And for functions of a few dimensions, right? These calculations are not too tough. For functions of many dimensions, this is tedious at best. Error-prone at worst. Think about even something like 10 equations for 10 unknowns. If your error rate is 1%, well, you're shot. There's a pretty good chance that you missed one element of the Jacobian. You made a mistake somewhere in there.
And now you're not doing Newton-Raphson. You're doing some other iterative method that isn't the one that you intended. There are a lot of times where you-- maybe you have an analytical formula for some of these f's, but not all of them. So where can these functionalities come from? We've seen some cases, where you have physical models. Thermodynamic models that you can write down by hand. But where are other places that these functions come from? Ideas? AUDIENCE: [INAUDIBLE] JAMES SWAN: Oh, good. AUDIENCE: [INAUDIBLE] JAMES SWAN: Beautiful. So this is going to be the most common case, right? Maybe you want to use some sort of simulation code, right? To model something. It's somebody else's simulation code. They're an expert at doing finite element modeling. But the output is this f that you're interested in, and the input to the simulation are these x's. And you want to find the roots associated with this problem that you're solving via the simulation code, right? This is pretty important, being able to connect different pieces of software together. Well, there's no analytical formula for f there. OK? You're shot. So it may come from results of simulations. This is extremely common. It could come from interpretation of data. So you may have a bunch of data that's being generated by some physical measurement or a process, either continuously or you just have a data set that's available to you. But these function values are often not-- they're not things that you know analytically. It may also be the case that, oh, man, even Aspen, you're going to wind up solving systems of nonlinear equations. It's going to use the Newton-Raphson method. Aspen's going to have lots of these formulas in it for functions. Who's going in by hand and computing the derivatives of all these functions for Aspen? MATLAB has a nonlinear equation solver in it. You give it the function, and it'll find the root of the equation, given a guess. It's going to use the Newton-Raphson method.
Who's computing the Jacobian for MATLAB? You can. You can compute it by hand, and give it as an input. Sometimes that's a really good thing to do. But sometimes, we don't have that available to us. So we need alternative ways of computing the Jacobian. The simplest one is a finite difference approximation. So you recall the definition of the derivative. It's the limit of this difference, f of x plus epsilon minus f of x divided by epsilon, as epsilon goes to zero. There's an error in this approximation for the derivative with a finite value for epsilon, which is proportional to epsilon. So choose a small value of epsilon. You'll get a good approximation for the derivative. It turns out the accuracy depends on epsilon, but kind of in a non-intuitive way. And here's a simple example. So let's compute the derivative of f of x equals e to the x, which is e to the x. Let's evaluate it at x equals 1. So f prime of 1 is e to the 1, which should be approximately e to the 1 plus epsilon minus e to the 1, over epsilon. And here I've done this calculation. And I've asked, what's the absolute error in this calculation, by taking the difference between this and this, for different values of epsilon. You can see initially, as epsilon gets smaller, the absolute error goes down in proportion to epsilon. 10 to the minus 3, 10 to the minus 3. 10 to the minus 4, 10 to the minus 4. 10 to the minus 8, 10 to the minus 8. 10 to the minus 9. 10 to the minus 7. 10 to the minus 10. And 10 to the minus 6. So it went down, and it came back up. But that's not what this formula told us should happen, right? Yes? AUDIENCE: So just to be sure. That term in that column on the right? JAMES SWAN: Yes? AUDIENCE: It says exponential 1, but it represents the approximation? JAMES SWAN: Exponential 1 is exponential 1. f prime of 1 is our approximation here. AUDIENCE: Oh, OK. JAMES SWAN: Sorry that that's unclear. Yes, so this is the absolute error in this approximation. So it goes down, and then it goes up.
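That little experiment is easy to reproduce yourself. A minimal sketch in Python (the class uses MATLAB, but the arithmetic is identical in double precision):

```python
import math

x = 1.0
exact = math.exp(1.0)  # d/dx of e^x at x = 1 is e^1

# Forward-difference approximation of f'(1) for shrinking epsilon.
for k in range(1, 13):
    eps = 10.0 ** (-k)
    approx = (math.exp(x + eps) - math.exp(x)) / eps
    print(f"eps = 1e-{k:02d}   abs error = {abs(approx - exact):.3e}")
```

The printed error falls roughly in proportion to epsilon at first, bottoms out around epsilon of 10 to the minus 8, and then climbs back up as the subtraction loses digits, just like the table on the slide.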
Is that clear now? Good. OK, why does it go down? It goes down because our definition of the derivative says it should go down. At some point, I've actually got to do these calculations with high enough accuracy to be able to perceive the difference between e to the 1 plus 10 to the minus 9, and e to the 1. So there is a round-off error in the calculation of this difference that reduces my accuracy at a certain level. There's a heuristic you can use here, OK? You want to set this epsilon, when you do this finite difference approximation, to be the square root of the machine precision times the magnitude of x, the point at which you're trying to calculate this derivative. Usually we're in double precision, so this is something like 10 to the minus 8 times the magnitude of x. That's pretty good. That holds true here. OK? You can test it out on some other functions. If x is zero, or very small, we don't want a relative tolerance. We've got to choose an absolute tolerance instead. Just like we talked about with the step norm criteria. So one has to be a little bit careful in how you implement this. But this is a good guide, OK? A good way to think about how the error is going to go down, and where it's going to start to come back up. Make sense? Good. OK, so how do you compute elements of the Jacobian then? Well, those are all just partial derivatives of the function with respect to one of the unknown variables. So partial f i with respect to x j is just f i at x plus some epsilon deviation of x in its j-th component only. So this is like a unit vector in the j direction, or associated with the j-th element of this vector. Minus f i of x, divided by this epsilon. Equivalently, you'd have to do this for each f i. You can compute whole columns of the Jacobian very quickly by computing f of x plus this epsilon deviation, minus f of x, over epsilon. Just evaluate your vector-valued function at these different x's. Take the difference, and that will give you column j of your Jacobian.
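Column by column, that procedure might look like the following in Python. This is a sketch, not the MATLAB code from the slides; the function name and the `eps_rel` default are my own, with the square-root-of-machine-precision heuristic and the absolute floor for small x built in:

```python
import numpy as np

def fd_jacobian(f, x, eps_rel=1e-8):
    """Forward-difference approximation of the Jacobian of f at x.

    f       : callable mapping an n-vector to an n-vector
    eps_rel : roughly sqrt(machine precision), per the heuristic above
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    f0 = f(x)                  # compute f(x) once; reuse it for every column
    J = np.zeros((n, n))
    for j in range(n):
        # absolute floor guards against x[j] == 0
        eps = eps_rel * max(abs(x[j]), 1.0)
        xp = x.copy()
        xp[j] += eps           # perturb only the j-th component
        J[:, j] = (f(xp) - f0) / eps   # column j of the Jacobian
    return J
```

Note that because f of x is computed once up front, this costs n plus 1 function evaluations rather than 2n.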
So how many function evaluations does it take to calculate the Jacobian at a single point? How many times do I have to evaluate my function? Yeah? AUDIENCE: 2n. JAMES SWAN: 2n, right. So if I have n, if I have n elements to x, I've got to make two function calls per column of J. There's going to be n columns in J. So 2n function evaluations to compute the Jacobian at a single point. Is that really true though? Not quite. f of x is f of x. I don't have to compute it every time. I just compute f of x once. So it's really like n plus 1 that I have to do, right? n plus 1 function evaluations to compute this thing. I've actually got to compute them, though. Function evaluations may be really expensive. Suppose you're doing some sort of complicated simulation, like a finite element simulation. Maybe it takes minutes to generate a function evaluation. So it can be expensive to compute the Jacobian in this way. It can just be expensive to compute the Jacobian. How is approximation of the Jacobian going to affect the convergence? What's going to happen to the rate of convergence of our method? It's going to go down, right? It's probably not going to be linear. It's not going to be quadratic. It's going to be something superlinear. It's going to depend on how accurate the Jacobian is. How sensitive the function is near the root. But it's going to reduce the accuracy of the method, or the convergence rate of the method, by a little bit. That's OK. So this is what MATLAB does. It uses a finite difference approximation for your Jacobian when you give it a function and you don't tell it the Jacobian explicitly. Here's an example of how to implement this yourself. So I've got to have some function that does whatever this function is supposed to do. It takes as input x and it gives an output f. And then the Jacobian, right? It's a matrix. So we initialize this matrix. We loop over each of the columns. We compute the displacement, right? The deviation from x for each of these.
And then we compute this difference and divide it by epsilon. I haven't done everything perfectly here, right? Here's an extra function evaluation. I could just calculate the value of the function at x before doing the loop. I've also only used a relative tolerance here. I'm going to be in trouble if xi is 0. It's going to be a problem with this algorithm. These are the little details one has to pay attention to. But it's a simple enough calculation to do. Loop over the columns, right? Compute these differences. Divide by epsilon. You have your approximation for the Jacobian. I've got to do that at every iteration, right? Every time x is updated, I've got to recompute my Jacobian. That's it though. All right, that's one way of approximating a Jacobian. There's a method that's used in one dimension called the Secant method. It's a special case of the Newton-Raphson method and uses a coarser approximation for the derivative. It says, I was taking these steps from xi minus 1 to xi. And I knew the function values there. Maybe I should just compute the slope of the line that goes through those points, and say, that's my approximation for the derivative. Why not? I have the data available to me. It seems like a sensible thing to do. So we replace f prime at xi: f of xi minus f of xi minus 1 goes down here, and we put xi minus xi minus 1 up here. That's our approximation for the derivative, or the inverse of the derivative. This can work, it can work just fine. Can it be extended to many dimensions? That's an interesting question, though. This is simple. In many dimensions, not so obvious, right? If I know xi and xi minus 1, f of xi and f of xi minus 1, can I approximate the Jacobian? What do you think? Does it strike you as though there might be some fundamental difficulty to doing that? Yeah? AUDIENCE: Could you approximate the gradient? [INAUDIBLE] gradient of f at x. JAMES SWAN: OK. AUDIENCE: But I'm not sure whether you can go backwards from the gradient to the Jacobian.
JAMES SWAN: OK. So, let's-- go ahead. AUDIENCE: Perhaps the difficulty is, I mean when they're just single values-- JAMES SWAN: Yeah. AUDIENCE: You can think of [INAUDIBLE] derivative, right? JAMES SWAN: Yeah. AUDIENCE: [INAUDIBLE] get really big, you get a vector of a function at xi, a vector of a function of xi minus 1 or whatever. Vectors of these x's. And so if you're [INAUDIBLE] JAMES SWAN: Yeah, so how do I divide these things? That's a good question. The Jacobian-- how much information content is in the Jacobian? Or how many independent quantities are built into the Jacobian? AUDIENCE: [INAUDIBLE] JAMES SWAN: n squared. And how much data do I have to work with here? You know, order n data to figure out order n squared quantities. This is the division problem you're describing, right? So it seems like this is an underdetermined sort of problem. And it is. OK? So there isn't a direct analog to the Secant method in many dimensions. We can write down something that makes sense. So this is the 1D Secant approximation. That the value of the derivative multiplied by the step between i minus 1 and i is approximated by the difference in the values of the function. The equivalent is the value of the Jacobian multiplied by the step between i minus 1 and i is equal to the difference between the values of the functions. But now this is an equation for n squared elements of the Jacobian, in terms of n elements of the function, right? So it's massively, massively underdetermined. OK? Here we have an equation for-- we have one equation for one unknown. The derivative, right? Think about how it was moving through space before, right? The difference here, xi minus xi minus 1, that's some sort of linear path that I'm moving along through space. How am I supposed to figure out what the tangent curves to all these functions are from this linear path through multidimensional space, right? That's not going to work. So it's an underdetermined problem.
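In one dimension, where the problem is well determined, the secant update just described is only a few lines. A sketch in Python (the function name and tolerances are my own choices):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """1D secant method: replace f'(x) in the Newton-Raphson step with
    the slope of the line through the last two iterates."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            return x1              # zero slope; can't take a secant step
        # slope through (x0, f0) and (x1, f1) stands in for f'(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1
```

In many dimensions the analogous secant condition is underdetermined, as just discussed, which is exactly the gap Broyden's method fills.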
It's not so-- that's not so bad, actually. Right? Doesn't mean there's no solution. In fact, it means there are a lot of solutions. So we can pick whichever one we think is suitable. And Broyden's method is a method for picking one of these potential solutions to this underdetermined problem. We don't have enough information to calculate the Jacobian exactly. But maybe we can construct a suitable approximation for it. And here's what's done. So here's the Secant approximation. It says the Jacobian times the step size, or the Newton-Raphson step, should be the difference in the functions. And Newton's method for xi said the Jacobian times xi minus xi minus 1 was equal to minus f of xi minus 1. This is just Newton's method. Invert the Jacobian, and put it on the other side of the equation. Broyden's method said-- there's a trick here. Take the difference between these things. I get the same left-hand side on both of these equations. So take the difference, and I can figure out how the Jacobian should change from one step to the next. So maybe I have a good approximation to the Jacobian at xi minus 1; I might be able to use this still underdetermined problem to figure out how to update that Jacobian, right? So Broyden's method is what's referred to as a rank-one update. You should convince yourself that letting the Jacobian at xi minus the Jacobian at xi minus 1 be equal to this is one possible solution of this underdetermined equation. There are others. This is one possible solution. It turns out to be a good one to choose. So there's an iterative approximation now for the Jacobian. Does this strategy make sense? It's a little weird, right? There's something tricky here. You've got to know to do this. Right, so somebody has to have in mind already that they're looking for differences in the Jacobian that they're going to update over time. So this tells me the Jacobian, how the Jacobian is updated.
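Written out, the rank-one update says the new J is the old J plus the outer product of (delta f minus J delta x) with delta x, divided by delta x dot delta x, which makes the secant condition J delta x = delta f hold exactly. Here's a sketch of a Broyden iteration built on that update; this version solves the linear system at each step (the names and tolerances are my own), while the inverse-update variant via the Sherman-Morrison formula avoids even that:

```python
import numpy as np

def broyden(f, x0, J0, tol=1e-10, max_iter=100):
    """Broyden's method with the rank-one Jacobian update.

    f  : callable, n-vector -> n-vector
    J0 : initial Jacobian approximation (finite differences, say)
    """
    x = np.asarray(x0, dtype=float)
    J = np.array(J0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(J, -fx)   # Newton-like step with approximate J
        x_new = x + dx
        f_new = f(x_new)
        df = f_new - fx
        # rank-one update: afterwards J satisfies J @ dx == df
        J += np.outer(df - J @ dx, dx) / (dx @ dx)
        x, fx = x_new, f_new
        if np.linalg.norm(fx) < tol:
            return x
    return x
```

You can check the secant condition directly: multiplying the updated J by dx gives J dx plus (df minus J dx), which is df.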
Really we need the Jacobian inverse, and the reason for choosing this rank-one update approximation is it's possible to write the inverse of J at xi in terms of the inverse of J at xi minus 1 when this update formula is true. It's something called the Sherman-Morrison formula, which says the inverse of a matrix plus the dyadic product of two vectors can be written in this form. We don't need to derive this, but this is true. This matrix plus dyadic product is exactly this. We have a dyadic product between f and the step from xi minus 1 to xi. And so we can apply that Sherman-Morrison formula to the rank-one update. And not only can we update the Jacobian iteratively, but we can update the Jacobian inverse. So if I know J inverse at some previous time, I know J inverse at some later time too. I don't have to compute these things. I don't have to solve these systems of equations, right? I just update this matrix. Update this matrix, and I can very rapidly do these computations. So not only do we have an iterative formula for the steps, right? From x0 to x1 to x2, all the way up to our converged solution, but we can have a formula for the inverse of the Jacobian. We give up accuracy. But that's paid for in terms of the amount of time we have to spend doing these calculations. Does it pay off? It depends on the problem, right? We try to solve problems in different ways. This is a pretty common way to approximate the Jacobian. Questions about this? No? OK. That's Broyden's method. All right, here's the last one. The Damped Newton-Raphson method. We'll do this in one dimension. So the Newton-Raphson method, Newton and Raphson told us, take a step from xi to xi plus 1 that is this big. xi to xi plus 1, it's this big. Sometimes you'll take that step, and you'll find that the value of the function at xi plus 1 is even bigger than the value of the function at xi. There was nothing about the Newton-Raphson method that told us the function value was always going to be decreasing.
But actually, our goal is to make the function value go to 0 in absolute value. So it seems like this step, not a very good one, right? What are Newton and Raphson thinking here? This is not a good idea. The function value went up. Far from a root, OK? The Newton-Raphson method is going to give these sorts of erratic responses. Who knows what direction it's going to go? And it's only locally convergent. It tells us a direction to move in, but it doesn't always give the right sort of magnitude associated with that step. And so you take these steps and you can find out the value of your function, the norm value of your function. It's bigger than where you started. It seems like you're getting further away from the root. Our ultimate goal is to drive this norm to 0. So steps like that you might even call unacceptable. Right? Why would I ever take a step in that direction? Maybe I should use a different method when I take a step that's so big my function value grows in norm. So what one does, oftentimes, is introduce a damping factor, right? We said that this ratio, or equivalently, the Jacobian inverse times the value of the function, gives us the right direction to step in. But how big a step should we take? It's clear a step like this is a good one. It reduced the value of the function. And it's better than the one we took before, which was given by the linear approximation. So if I draw the tangent line, it intercepts here. If I take a step in this direction, but I reduce the slope by having some damping factor that's smaller than 1, I get closer to the root. Ideally we'd like to choose that damping factor to be the one that minimizes the value of the function at xi plus 1. So it's the argument that minimizes the value of the function at xi plus 1, or at xi minus alpha f over f prime. Solving that optimization problem is as hard as finding the root itself. So ideally this is true. But practically you're not going to be able to do it.
So we have to come up with some approximate methods of solving this optimization problem. Actually we don't even care about getting it exact. We know Newton-Raphson does a pretty good job. We want some sort of guess that's respectable for this alpha so that we get close to this root. Once we get close, we'll probably choose alpha equal to 1. We'll just take the Newton-Raphson steps all the way down to the root. So here it is in many dimensions. Modify the Newton-Raphson step by some value alpha, choose alpha to be the argument that minimizes the norm value of the function at xi plus 1. Here's one way of doing this. So this is called the Armijo line search. See? Line search. Start by letting alpha equal 1. Take the full Newton-Raphson step, and check. Was the value of my function smaller than where I started? If it is, let's take the step. It's getting us-- we're accomplishing our goal. We're reducing the value of the function in norm. Maybe we're headed towards zero. That's good. Accept it. If no, let's replace alpha with alpha over 2. Let's take a shorter step. We take a shorter step, and we repeat. Right? Take the shorter step. Check whether the value of the function with the shorter step is acceptable. If yes, let's take it, and let's move on. And if no, replace alpha with alpha over 2, and continue. So we halve our step size every time. We don't just have to halve it. We could choose different factors to reduce it by. But we try to take shorter and shorter steps until we accomplish our goal of having a function which is smaller in norm at our next iterate than where we were before. It's got-- the function value will be reduced. The Newton-Raphson method picks a direction that wants to bring the function value closer to 0. We linearize the function, and we found the direction we needed to go to make that linearization go to 0. So there is a step size for which the function value will be reduced.
And because of that, this Armijo line search version of the Damped Newton-Raphson method is actually globally convergent, right? The iterative method will terminate. You can guarantee it. Here's what it looks like graphically. I take my big step, my alpha equals 1 step. I check the value of the function. It's bigger in absolute value than where I started. So I go back. I take half that step size. OK? I look at the value of the function. It's still bigger. Let's reject it, and go back. I take half that step size again. The value of the function here is now smaller in absolute value. So I accept it. And I put myself pretty close to the root. So it's convergent, globally convergent. That's nice. It's not globally convergent to roots, which is a pain. But it's globally convergent. It will terminate eventually. You'll get to a point where you won't be able to advance your steps any further. It may converge to minima or maxima of a function. Or it may converge to roots. But it will converge. I showed you this example before with basins of attraction. So here we have different basins of attraction. They're all colored in. They show you which roots you approach. Here I've applied the Damped Newton-Raphson method to the same system of equations. And you can see the basins of attraction have shrunk because of the damping. What happens when you're very close to places where the Jacobian is singular is you take all sorts of wild steps. You go to places where the value of the function is bigger than where you started. And then you've got to step down from there to try to find the root. Who knows where those locations are? It's a very complicated, geometrically complicated space that you're moving through. And the Damped Newton-Raphson method is forcing the steps to always reduce the value of the function, so that reduces the size of these basins of attraction.
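In code, the halving loop just described is short. A 1D sketch (names and tolerances are my own; the floor on alpha just keeps the inner loop finite):

```python
def damped_newton(f, fprime, x, tol=1e-12, max_iter=100):
    """Newton-Raphson with Armijo-style backtracking:
    halve the step until |f| decreases, then accept the step."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        step = fx / fprime(x)      # full Newton-Raphson step
        alpha = 1.0
        # backtrack: shrink the step until the function value shrinks
        while abs(f(x - alpha * step)) >= abs(fx) and alpha > 1e-10:
            alpha *= 0.5
        x -= alpha * step
    return x
```

A classic stress test is f of x equals arctan of x: plain Newton-Raphson diverges from an initial guess of 2 because the full step overshoots, while the damped version settles into the root at zero.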
So this is often a nice way to supplement the Newton-Raphson method when your guesses aren't very good to begin with. When you start to get close to the root, you're always just going to accept alpha equals 1. The first step will be the best step, and then you'll converge very rapidly to the solution. Do we have to do any extra work actually to do this Damped Newton-Raphson method? Does it require extra calculations? What do you think? A lot of extra-- a lot of extra calculation? How many extra calculations does it require? Of course it requires extra. How many? AUDIENCE: [INAUDIBLE] JAMES SWAN: What do you think? AUDIENCE: [INAUDIBLE] JAMES SWAN: It's-- that much is true. So let's talk about taking one step. How many more-- how many more calculations do I have to pay to do this sort of a step? Or even, right, the multidimensional step? For each of these times around this loop, do I have to recompute? Do I have to solve the system of equations? No. Right? You precompute this, right? This is the basic Newton-Raphson step. You compute that first. You've got to do it once. And then it's pretty cheap after that. I've got to do some extra function evaluations, but I don't actually have to solve the system of equations. Remember this is order n cubed if we solve it exactly, maybe order n squared or order n if we do it iteratively and the Jacobian is sparse somehow and we know about its sparsity pattern. This is expensive. Function evaluations, those are order n to compute. Relatively cheap by comparison. So you compute your initial step. That's expensive. But all of this down here is pretty cheap. Yeah? AUDIENCE: You're also assuming that your function evaluations are reasonably true. JAMES SWAN: This is true. AUDIENCE: [INAUDIBLE] JAMES SWAN: It's true. Well, the Jacobian is also very expensive to compute then too. So, if-- AUDIENCE: [INAUDIBLE] JAMES SWAN: Sure, sure, sure. No, I don't disagree. I think one has to pick the method you're going to use to suit the problem.
But it turns out this doesn't involve much extra calculation. So by default, for example, fsolve in MATLAB is going to do this for you. Or some version of this. It's going to try to take steps that aren't too big. It will limit the step size for you, so that it keeps the value of the function reducing in magnitude. It's a pretty good general strategy. Yes? AUDIENCE: [INAUDIBLE] so why do we just pick one value for [INAUDIBLE] JAMES SWAN: I see. So why-- ask that one more time. This is a good question. Can you say it a little louder so everyone can hear? AUDIENCE: So why, instead of having just one value of alpha, not have several values of alpha [INAUDIBLE] JAMES SWAN: I see. So the question is, yeah, we used a scalar alpha here, right? If we wanted to, we could reduce the step size and also change direction. We would use a matrix to do that instead, right? It would transform the step and change its direction. And maybe we would choose different alphas along different directions, for example. So a diagonal matrix with different alphas. We could potentially do that. We're probably going to need some extra information to decide how to set the scaling in different directions. One thing we know for sure is that the Newton-Raphson step will reduce the value of the function. If we take a small enough step size, it will bring the value of the function down. We know that because we did the Taylor expansion of the function to determine that step size. And that Taylor expansion was going to be-- that Taylor expansion is nearly exact in the limit of very, very small step sizes. So there will always be some small step in this direction which will reduce the value of the function. In other directions, we may reduce the value of the function faster. We don't know which directions to choose, OK? Actually, I shouldn't say that. When we take very small step sizes in this direction, it's reducing the value of the function fastest. There isn't a faster direction to go in.
When we take impossibly small, vanishingly small step sizes. But in principle, if I had some extra information on the problem, I might be able to choose step sizes along different directions. I may know that one of these directions is more ill-behaved than the other ones, and choose a different damping factor for it. That's a possibility. But we actually have to know something about the details of the problem we're trying to solve if we're going to do that-- it's a wonderful question. I mean, you could think about ways of making this more, potentially more robust. I'll show you an alternative way of doing this when we talk about optimization. In optimization we'll do-- we'll solve systems of nonlinear equations to solve these optimization problems. There's another way of doing the same sort of strategy that's more along what you're describing. Maybe there's a different direction to choose instead that could be preferable. This is something called the dogleg method. Great question. Anything else? No. So globally convergent, right? Converges to roots, local minima or maxima. There are other modifications that are possible. We'll talk about them in optimization. There's always a penalty to doing this. The penalty is in the rate of convergence. So it will converge more slowly. But maybe you speed the calculations along anyway, right? Maybe it requires fewer iterations overall to get there, because you tame the locally convergent properties of the Newton-Raphson method. Or you shortcut some of the expensive calculations, like getting your Jacobian or calculating your Jacobian inverse. All right? So Monday we're going to review sort of the topics up until now. Professor Green will run the lecture on Monday. And then after that, we'll pick up with optimization, which will follow right on from what we've done so far. Thanks.
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. WILLIAM GREEN: So welcome to our special class with half the people gone for tonight's hearing. You guys are the chosen few, so I'm pleased you're here. AUDIENCE: [INTERPOSING VOICES] WILLIAM GREEN: So I'm just going to say a little bit about models versus data, and then I'm going to go back to boundary value problems and talk a little bit about how you handle really big ones. Yeah? AUDIENCE: Are these going to be [INAUDIBLE].. WILLIAM GREEN: Yeah. Yes, sir. AUDIENCE: All right. WILLIAM GREEN: And in fact, these are even going to be on the video, because the guy just fixed the video, so it will be recorded. All right. Yeah, I'm sorry. You said the video was broken last time, so that recording didn't work. All right. So when we're talking about models versus data, a really critical question is whether the model is really consistent with the data or not. And how do we know this is really true? So we talked about how we can figure out some sort of chi-squared maximum that we would tolerate. So the maximum that we tolerate, we call it Q, capital Q, in a lecture before. And so if we can find a chi-squared of theta that's less than this, then it's at least plausible that our model is true. Right? Or, alternatively, if chi-squared of theta for every value of theta is greater than chi-squared max, then we would say the model is false, or something else is really wrong with our experiment. Right? That's what we say when we're saying we have a maximum chi-squared that we would tolerate. So a critical question for us is whether we really found the very best fit value of the parameters.
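The Q test just described can be sketched concretely. All numbers below are invented for illustration, not from the lecture's experiment: synthetic data from c(t) = a·e^(−kt) with every point deliberately off by exactly one error bar, compared against a chi-squared max for 10 degrees of freedom (18.31 at the 95% level, a standard table value; `scipy.stats.chi2.ppf(0.95, 10)` would compute it).

```python
import numpy as np

# Toy illustration of the chi-squared-max (Q) test; all numbers are invented.
t = np.linspace(0.1, 2.0, 10)
sigma = 0.05                      # assumed error bars on each measurement
a, k_true = 2.0, 1.3
# deterministic "measurements": each point off by exactly one sigma,
# so chi-squared at the true parameters is exactly 10 (one per point)
data = a * np.exp(-k_true * t) + sigma

def chi_squared(k):
    model = a * np.exp(-k * t)
    return float(np.sum(((data - model) / sigma) ** 2))

Q = 18.31                         # 95th percentile of chi-squared, 10 d.o.f.
plausible = chi_squared(k_true) < Q   # 10.0 < 18.31: model is plausible
ruled_out = chi_squared(3.0) > Q      # a bad k blows far past the tolerance
```

With many measured points the degrees of freedom grow, and so does the discriminating power of the test, which is the point made about measuring lots of time points.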
In some simple problems you might have linear dependence on the parameters, and then there's only one best fit usually, as long as you have a non-singular set of equations, and then there's no trouble. However, more commonly you have equations which are singular or close to singular, and also you normally have non-linear dependence between the model predictions and the parameters. So, for example, I do kinetics. We always care about the rate coefficients, the k's, and they always come inside a differential equation. Right? You always have, like, d concentration dt is equal to some terms, and some of them are, you know, k times concentration i times concentration j, whatever pair it is, ij. Right? You have a lot of terms like this, bimolecular reaction terms. And these k's, some of them I don't know. And so these would be one of my thetas. So this would be, you know, theta 17 would be one of these k's that I want to determine. And when I have a differential equation like this, when I integrate it, the k is always going to come in non-linearly in the solution. Right? So when I integrate this solution, even in the simplest case you get things like c of t is equal to a e to the negative kt. That might be a really super simple case if you have linear kinetics. The k is still coming in non-linearly into the predictions. Right? And if you have anything complicated you can't even write it down in a closed form expression, and you have to integrate with ode45 or ode15s, and so the relationship between k and the concentrations that you measure can be really complicated. So it's always non-linear. In my world all the models are non-linear. And I think there's many other situations like this. So if you try to measure a diffusivity, it might be possible to set up an experiment where you'd expect a linear dependence between the diffusivity and something you'd measure, but most likely not.
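That nonlinear dependence is easy to see even in the simplest rate law. This is my own small sketch, not lecture code: for dc/dt = −kc the prediction c(t) = c0·e^(−kt) is nonlinear in k (doubling k does not halve or double c), and a generic integrator (a hand-rolled RK4 below, standing in for ode45/ode15s) reproduces the closed form.

```python
import numpy as np

def c_analytic(k, c0=2.0, t=1.0):
    # closed-form solution of dc/dt = -k*c: nonlinear in the rate coefficient k
    return c0 * np.exp(-k * t)

def c_rk4(k, c0=2.0, t=1.0, n=1000):
    # fixed-step RK4 integration, standing in for ode45/ode15s from the lecture
    h, c = t / n, c0
    f = lambda c: -k * c
    for _ in range(n):
        k1 = f(c); k2 = f(c + h * k1 / 2); k3 = f(c + h * k2 / 2); k4 = f(c + h * k3)
        c += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return c

# doubling k does NOT halve the predicted concentration: non-linear in k
nonlinear_in_k = abs(c_analytic(2.0) - 0.5 * c_analytic(1.0)) > 1e-3
```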
So most likely it will come in non-linearly somehow, and you'd have some model for it, and you'd have a non-linear dependence between that number and the numbers you would measure. Right? And almost all the properties are like that. So because you have a non-linear dependence, then you have to worry about how many minima you have in your surface. So chi-squared of theta is a non-linear function of a lot of variables. And typically it won't just have one minimum, you'll have a whole lot of local minima. Now one of those is the global minimum. And that's the one you want, that's the very best possible fit. But you normally won't know how many minima it has, and so no matter how many minima you find, you don't know if you're done, and you don't know if you really found the best fit. OK? So just because all the ones you found so far are bigger than chi-squared max, you might not be ready to write that article in Nature to claim that the laws of physics are incorrect, because you're not sure you really found the very best fit. OK? So that's one of the many problems of trying to figure out if a model and the data are really consistent. Now that particular one you can overcome if you can find a guaranteed way to find the global optimum, to find the very best possible fit that there is. And some people spend their life working on this, and one of those people is Professor Barton in this department. And so he teaches a class on how to do this, and there are actually methods to do it. So if you want to see the application of this to kinetic models, there's a paper by me and Professor Barton, and the lead author is one of his former students, Adam Singer, in J. Phys. Chem., and it just shows how you can use global optimization to guarantee you have the very best possible values of the parameters, and then I'll show you what happens.
So this is a case where the experimental data was measured by one of my students, James Taylor, and that's the thing with the error bars, and he did a lot of hard work to make sure he had pretty good error bars and he really knew what they were. And so he had those points. So he measured a lot of points. This measuring a lot of points is a key thing. So it means his degrees of freedom is very large. Because he uses many, many different time points along the way there, and he has error bars on all of them. And so because the degrees of freedom are very large, it actually makes it a very good constraint on the model. It tests it very thoroughly. Whereas if you only measured one point, then that's not a very good constraint. Now what James had done-- and we published a paper in 2004 where he did it-- where he said, you know, when I do my fits this is what I get. I get these kind of purple and green things, and they're all outside the error bars, and therefore I publish in the Journal of Physical Chemistry, I claim this model is wrong based on my experimental data. OK? So we published that in 2004, it was accepted by the referees. It's in the literature, you can read it. Don't believe it, though, because Adam Singer went back and did global optimization using the same model, and his red line goes right through all the points really well. Do you see that? You can see the red line there. See this red curve? That's the true best fit. And so all the fits that James had found were all local minima in the chi-squared surface. And when he actually found the best fit with Adam's fancy method, then it turns out that actually the model matched the data perfectly. So that's definitely true. So what we published the first time was completely wrong, because we said the data disproved the model. But in fact the model proved the data-- I mean, the data proved the model, or was completely consistent with the model in every way, in great detail. So I was a little embarrassed.
So then James, also in the same paper, had found-- they measured another condition in a slightly different case, and actually was looking at a different solvent. And in this case he found this green curve as his best fit, and it looked pretty good by eye, though you can see it sort of skittering on the tops of the error bar bands. And so this is one where we said, you know, we do the statistical test, and there's like a 5% chance that you would measure data like he measured if the green model was the truth. And so we said some wishy-washy words in the paper, basically because we weren't really sure if it was consistent or not consistent with the data. But in that case Adam used his fancy global optimization techniques and found that James' fit was actually not that far from the global optimum, and he got the red curve there, marked by the red arrow, and that one looks pretty good, it actually goes through the data. But it doesn't really do it right, and he found that the confidence level is only 16%. So it's only a 16% chance, one chance out of six, that James would have measured the data he measured if that model was true. And now we're sure that we have the global best fit, so now we're, like, thinking, well, you know, maybe we should say that the model is not true for this case. Because it's an 84% chance it's not true anyway. So actually we got it wrong both ways. So in the first case we said that our data disproved the model. And in the second case, we said that our model was pretty good with the data, but actually they were both wrong. So one was wrong in that actually the data did prove the model correct for the first case, and it actually looks a little fishy for the second case. So anyway, this is just a concrete example to make you be a little bit more worried about what you do when you do these fitting things. Now there's a variety of ways to try to do the least-squares fitting to try to find optima. There was a guy named Carr from the University of Minnesota.
And he published this [INAUDIBLE] way to just try a whole lot of different initial guesses, and that usually works to find the global minimum. But if you really want to have a guaranteed global minimum that you're absolutely sure definitely is the global minimum, then you really need to use global optimization methods, and you really should talk to Professor Barton. And for these kinetic models he actually has a distributed code called GDOC that will just find the global optimum, as long as you don't have too many kinetic parameters. I think it can handle up to about 6-- around 6 adjustable parameters. If you have more than that, then typically not. Because the problem with global optimization methods is that they scale exponentially with the number of adjustable parameters, or the number of things you're trying to optimize. Now, on the other hand, if you have a model that needs more than six adjustable parameters to fit your data, then you might want to try to think of a different experiment. Because, you know, the famous saying is, like, give me N parameters and I can fit an elephant. So if you have more than six parameters and you're trying to fit some data set, probably if you do it right you'll find a fit, but it may not really prove anything. So from that point of view, Professor Barton's code, GDOC, is pretty convenient for doing these non-linear optimizations. So that's a code for non-linear optimization with differential equations embedded. And they can be stiff and they can be large, it doesn't matter. What really matters is just the number of adjustable parameters, that's the cost. All right, now there was a really good question that you asked, I think, in class last time, maybe. Maybe the time before. And it was, what happens-- so let's look at the different cases you can get.
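The multistart idea just mentioned (try a whole lot of initial guesses, keep the best local minimum found) can be sketched like this. The double-well "chi-squared surface" and the plain gradient-descent local minimizer are both invented for illustration, standing in for a real objective and a real local optimizer such as lsqnonlin or fminsearch.

```python
import numpy as np

def chi2(theta):
    # toy chi-squared surface with two local minima; the global one is near -1
    return (theta ** 2 - 1.0) ** 2 + 0.3 * theta

def grad(theta):
    return 4.0 * theta ** 3 - 4.0 * theta + 0.3

def local_min(theta0, lr=0.01, iters=2000):
    # plain gradient descent: converges to whichever local minimum is nearby
    theta = theta0
    for _ in range(iters):
        theta -= lr * grad(theta)
    return theta

starts = np.linspace(-2.0, 2.0, 9)           # a grid of initial guesses
candidates = [local_min(t0) for t0 in starts]
best = min(candidates, key=chi2)             # keep the lowest chi-squared found
```

Starts near +2 fall into the shallow local minimum near theta = +1; only the multistart sweep finds the deeper one near theta = −1, which is exactly the failure mode in the James Taylor story above. And unlike GDOC-style deterministic global optimization, this gives no guarantee.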
So one case you get, you know: if chi-squared of theta best, your best fit parameters, is a lot bigger than chi-squared max, then you're ready to say the data disprove the model. So that one you're OK. If chi-squared of your best fit is less than, or significantly less than, chi-squared max-- it has to be much less than it, really less, not equal-- then you're really happy to say that the models are consistent. The model and data are consistent with each other. And, in this case, what you would normally do next is determine the confidence intervals on the thetas that you adjusted. Right? And if you read the chapter in Numerical Recipes, for example, or some of the notes online, it tells you how to do this. Right? And so that case is OK? It's OK. So the case that's really problematic is you find chi-squared of theta best, and it's less than chi-squared max, so you think the model is consistent with the data, but it's pretty darn close to chi-squared max. And so then you have to figure out-- well, the weird aspect of this is, I now have-- here's my famous plot of theta one and theta two. So I have two parameters I'm adjusting. Theta one, theta two; here's my best fit. My best fit. If chi-squared max-- if the curve of constant chi-squared max is something like this, this is chi-squared of theta equals chi-squared max. If it's like that, then I have some big region of a lot of different theta values that give pretty good fits, and I'm happy to draw confidence intervals-- like, draw lines here and here, for example, and say that's the confidence interval on theta one-- and I'm all right. OK? And it's sloped here, so I might want to tell people about the co-variance between theta one and theta two. All right? But I know what to do, it's fine. Now, that was the case where chi-squared of theta best is significantly less than chi-squared max. Now suppose instead chi-squared best was very close to chi-squared max. So then here is chi-squared max.
This is chi-squared of theta equals chi-squared max; this is the good case. Here's the bad case. So in the bad case, there is some region of thetas where the model is consistent with the data, but it's very tiny. There's a little tiny range of theta values that would make the model consistent with the data. Now what's weird about this is that my best fit, it's not very good. The chi-squared is almost up to the maximum chi-squared I would tolerate. So it means my deviations between my model and data are pretty bad. Like this plot, actually. OK? And so I'm, like, you know, it could-- it's possible. I was a little bit unlucky and my data set is pretty far off, but still within the possible range. And so maybe the model and data are consistent. But then when I try to compute the confidence intervals on my parameters, there's only a really small range of parameters that are going to give good fits here. Because the best possible fit is this red line. There's no choice of the parameters that's going to go through all the circles with this model. So now I'm, like, what's going on here? So if I went ahead and did it the same way I did before, I'd draw my confidence intervals like this and say, OK. Say, wow, I've done an amazingly great job of determining theta one. I know theta one very, very, very precisely. And the same for theta two. Here's theta two, it's got to be in that range, and I've really done a great job. So I'm awesome. But I'm only awesome because my original best fit wasn't very good. So this might make you feel a little sick in your stomach if you're getting ready to publish in Nature. Right? And then if the reviewers are on their game, you might get something really scathing back from the reviewers. So there's kind of two ways to play it at this point. One way is, you are absolutely sure that your equations are correct. You are Sir Isaac Newton, you have just written the law of gravitation.
You are measuring the orbits of the moons of Jupiter, and you are sure that your equations are right. And so, you know, my problem is my antiquated telescopes of 1600 are not that great, and so I think I have maybe-- maybe I misestimated my error bars on my measurements. Maybe I have some aberration or something, the angles are off, right? My twisted telescope makes it look as if the star moves or something, right? OK? So that's one possible way to go: say I really believe the model, and so I believe this is the truth, and I really believe my chi-squared max to be what it is, and I double-checked the error bars and I think it's true. So I think I have done a great job and I should publish it in Nature, because I have determined these parameters with 17 more decimal places than anybody ever could before. OK? So that's one way to look at it. The other way to look at it is what's recommended in the Numerical Recipes book. And they say, instead of writing the equation to be chi-squared of theta has to be less than chi-squared max, when you're computing the confidence intervals, instead write chi-squared of theta has to be less than chi-squared best fit plus delta chi-squared. So they say don't use a stinky little circle here, draw a nice big one around there. And what is this saying? I guess what this is saying is, I don't believe that my poor agreement between the model and data justifies me declaring very narrow parameter values. So I just use the kinds of parameter value confidence intervals that I would have actually computed a priori when I do my experimental design. I think that's really how well I can determine stuff. And I think something is wrong, that my chi-squared best fit is so bad. Maybe I didn't do my global optimization right and so I have the wrong parameters. Maybe I mis-estimated my error bars and so my chi-squareds are all off. Maybe-- I don't know what else happened.
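The two interval recipes being contrasted can be made concrete with toy numbers of my own: a one-parameter chi-squared whose best-fit value, 17.0, sits just under a tolerance of 18.31, so the strict rule chi²(θ) ≤ chi²max yields a suspiciously narrow interval, while the Numerical Recipes style rule chi²(θ) ≤ chi²best + Δchi² (Δ = 3.84 for one parameter at 95%, a standard table value) gives the broader, a-priori-sized one.

```python
import numpy as np

# Toy numbers, not from the lecture's experiment:
chi2_best, chi2_max = 17.0, 18.31   # best fit barely under the tolerance
delta = 3.84                         # 95% delta-chi-squared for 1 parameter

def chi2(theta):
    # parabolic approximation of the chi-squared surface near the best fit
    return chi2_best + ((theta - 1.0) / 0.5) ** 2

theta = np.linspace(-2.0, 4.0, 60001)
strict = theta[chi2(theta) <= chi2_max]             # "stinky little circle"
relaxed = theta[chi2(theta) <= chi2_best + delta]   # Numerical Recipes rule
strict_width = strict.max() - strict.min()           # ~1.14
relaxed_width = relaxed.max() - relaxed.min()        # ~1.96
```

The strict rule rewards a poor best fit with an artificially tight interval; the relaxed rule restores the interval size you would have expected a priori, which is the whole argument here.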
Something happened, and I don't believe I'm that good, so therefore I'm going to quote much broader confidence intervals, which means much less precision in my determination of the parameters. And that's what the Numerical Recipes guys recommend you should do. But you know, it's up to you. And I see it both ways. So the people who won the Nobel Prize for dark energy-- do you guys know about this? So if you ever saw the data there. So that's one where they said-- they were using Einstein's theory of relativity, general relativity. They said, we believe Einstein, we believe that equation is true. And we're not going to, you know-- the fact that the data only matches when you choose a certain value of the cosmological constant, we're just going to go with it. And we're reporting that's the value of the cosmological constant. Now some other people might say that's kind of a goofy theory. Einstein changed his mind six times about whether to put the cosmological constant in the equation at all, so I'm not so confident, and I would report something different. But they went with their gut. They said, we believe Einstein. They went with it. The Nobel Committee agreed, and they got the Nobel Prize. So you know, you're taking your chances. The Numerical Recipes guys are more like engineers; they say, oh, we don't believe models that much anyway. Let's just use more conservative confidence intervals. Any questions about this? OK. All right. So now let's change topics. And we'll change to [INAUDIBLE] splitting. So if you remember back when we were talking about boundary value problems, we're often trying to solve problems that are like this. All right. So this is the conservation equation for a species in a reacting flow situation. So you have some convection, you have some diffusion, you have some reactions. Now in my life I have a lot of models. I have about 200 species in them. So this k is 200 different variables.
And then you have these equations, and on top of them you have [INAUDIBLE] equations for the momentum conservation and continuity. And so this is actually a pretty bad set of equations. And typically the chemistry is stiff. Because I have chemistry time scales that'll be, like, in picoseconds, sub-nanosecond time scales. And I care about simulating-- well, in my case maybe I have an advanced new engine burning some fuel. And so we're running a simulation of a piston cycle, maybe 10 milliseconds, but the chemistry is sub-nanosecond, so then you multiply it out. So that's like I'm going from 10 to the minus two seconds down to 10 to the minus 10 seconds. That's eight orders of magnitude difference in time scales. So I'm in a bad way trying to solve this equation. And I have 200 of these guys coupled to each other. And then I have to think about what my mesh looks like. And I'm trying to simulate inside a piston-- actually I have a moving mesh because the piston's compressing, so the equations are kind of complicated even in what the mesh is doing. And then I need a bunch of mesh points there because the-- did you guys hear of the Kolmogorov scale? Have you guys learned this yet? It's a scale like the minimum eddy size if you have a turbulent flow. And that's pretty darn small also. So it might be sub-micron. And so I have a natural length scale that's like microns, maybe. So 10 to the minus six meters, but the piston diameter is a couple centimeters, so again I have like four orders of magnitude between the mesh size I need to resolve the physics and the size of the thing. So I need about 10,000 points across the cylinder, but it's a 3-D problem, so I need 10 to the fourth, times 10 to the fourth, times 10 to the fourth, so I need 10 to the 12th mesh points. And then I have 200 variables, state variables, at each mesh point, so I have 10 to the 14th state variables. So how much memory do you have in your computer? How much RAM? You have probably like 16 gigabytes.
So you have ten to the ninth-- ten to the ninth bytes, is that right? No-- what's giga? Ninth? OK, so you have ten to the tenth. Sorry. Ten to the tenth bytes, but I have 10 to the 14th state variables. So I have a problem. So then you need to figure out how you solve it. So there's two schools of thought on how to solve these equations. The school of thought which has actually been the most successful so far, but is not readily available, is to get the government to build you the biggest possible computer in the world with as much memory as possible. And then you can actually keep track of your state vector, all 10 to the 14th elements, and you reserve this whole computer for yourself for, you know, a month. And just run it. And you figure out how to parallelize all the calculations, and you just use an explicit solver. And so that way it's just straightforward. All you're doing is you're computing, you know, y of t plus delta t is equal to something, and it's just an explicit formula-- ode45, or a fancy version of that. OK? And it's just algebra, and you just add them up and it's no problem. But this is a limited-feasibility thing. This only became possible about three years ago, when they built a computer that actually had enough memory to store the state vector. It's also-- if the equations are too stiff it still doesn't work, because the step size in time you need to use is so small-- so say it's 100 picoseconds and you're trying to get it out to a millisecond, so you need 10 to the eighth or so time steps, which means you need double precision numbers all the way through. But if the government builds you a 64-bit or 128-bit machine, then you can do it. But anyway, this is like a very special kind of approach. So now most people, including me, don't have access to that kind of computer power. Actually, if you want to read about that, there's a woman who does that, Jacqueline Chen, and you might want to look on the internet and see about her.
And somehow she became buddies with the people in charge of the supercomputers, and so they're happy to give her a month of, you know, the best computer in the world every year, and she runs some really awesome calculation that's exactly right. Solves everything, has all ten to the 14th state variables, and that's fantastic. And currently that's the benchmark for how people test approximate methods for how to solve this. Because she actually has the full solution, numerically converged. But this again has limited applicability. So a lot of people are working on approximation methods to try to solve this equation if you don't have access to such awesome computer resources. And the general idea of them-- well, there are two ideas. So one idea is somehow get rid of the turbulence. OK? And so what they do is they use a mesh coarser than what you would need to resolve the Kolmogorov scale, and then they use models for what happens for the itty bitty tiny eddies. And those are called sub-grid scale models, and this whole approach is called large eddy simulation. And the idea is you keep track of the large eddies, the big recirculation zones-- that's modeled explicitly-- but you don't keep track of all the itty bitty tiny swirling little eddies that last for a little period of time before they dissipate because of diffusion. OK? So that's the concept. And there's, I don't know, half of all the mechanical engineers who work in this area working on large eddy simulation, trying to figure out how to do this. Maybe Jim as well, I don't know. No. OK, but at least he knows some of them, probably. So that's one whole branch. It's to try to get rid of the turbulence. And basically that's trying to reduce the density of the mesh I need-- the spatial mesh. But the other issue is the time mesh. And the time is even worse, right?
Because I said I have ten to the eighth time points and I only need ten to the fourth spatial points in each dimension. So in some ways, if I can do something with the time, that really can buy me a lot. I have eight orders of magnitude I'd like to try to reduce to three orders of magnitude if I could; that would be really nice. The idea is to say, well, let's separate the fast time scales from the slow time scales. And in fact the fast time scales are mostly all chemistry, particularly after I do large eddy simulation, where I get rid of the really small spatial scales; then the time scale of diffusion over this big mesh is kind of slow. And so I can go to a time scale that's completely controlled by chemistry. So we already told you, how do you solve stiff differential equations? Use a stiff solver, right? And there are special methods, there are implicit methods, and there are specialized solvers. And what they do is they allow you to take larger time steps than you would be allowed to use with explicit methods, where you would run into numerical instabilities. If you try to use the explicit method with a big time step that's much bigger than the real physical time scale, you'll get some crazy oscillations. But if you use the stiff solvers, you can get away with time steps that are much, much larger than the real physical scale. So that allows you to go from 100 picoseconds up to maybe 100 nanoseconds. Get rid of three orders of magnitude of time step by using a stiff solver. So that's pretty good-- three orders of magnitude reduction? Well, go for it. It's not only reducing the CPU time, potentially, but it's even more importantly reducing the number of points you have to save. Because when you run this calculation, what Jacqueline Chen ends up with is a list: y at every x and every time-- x n, y n, t l, something like this. Alpha, I don't know. That's the output of her program. OK, so how big is this object?
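A tiny demonstration of the stability point just made (my own toy, not lecture code): on dy/dt = −λy with λ = 1000, a step h = 0.01 is ten times the 1/λ time scale. Forward Euler's amplification factor is |1 − λh| = 9 per step, so it explodes; backward Euler's is 1/(1 + λh) = 1/11, so it damps toward zero like the true solution, even with the "too large" step.

```python
# Stiff test problem dy/dt = -lam*y, stepped with h far above the 1/lam
# time scale, comparing an explicit and an implicit one-step method.
lam, h, n = 1000.0, 0.01, 100
y_fwd = y_bwd = 1.0
for _ in range(n):
    y_fwd = y_fwd + h * (-lam * y_fwd)   # forward Euler: factor (1 - 10) = -9
    y_bwd = y_bwd / (1.0 + lam * h)      # backward Euler: factor 1/11

explicit_blew_up = abs(y_fwd) > 1e10     # the "crazy oscillations" grow
implicit_stable = 0.0 <= y_bwd < 1.0     # decays, like the exact e^(-1000 t)
```

The price, as noted below, is that each implicit step requires solving an (in general nonlinear) algebraic equation; here it is trivial only because the toy problem is scalar and linear.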
This is 200, times 10 to the fourth, times 10 to the fourth, times 10 to the fourth, times 10 to the eighth. Right? That's how many numbers she's going to get out of her simulation. So, actually, her biggest problem is not actually solving the simulation. As long as she can get the one month of CPU time, she's good; she's got her solution. Her problem is how to analyze her data. Because now she has this much data that she has to figure out how to say something rational about. And so she has like a whole army of post-docs whose whole job is trying to invent new data interrogation methods. And also, this data set is so large it cannot be stored on a hard disk. In fact, it cannot be stored in the biggest farm of hard disks. So she has to, as the calculation is running, send the data to different farms of hard disks around the world, filling them completely up, and then sending to the next one and filling them up. So she has fragments of this data set stored all over the place. And then if she wants to make a graph, she has to have special graphing software that looks inside this data farm, gets some numbers, looks inside that data farm, and puts it all together in some way. So this is a pretty hard project. OK? Now, if I can use a stiff solver, I don't need to use so many time steps. So I can get this down to, say, 10 to the fifth; at least I got rid of three orders of magnitude in my data. Right? So now I just need one data farm, maybe, instead of, you know, using all the ones Amazon and Google own or something. OK? So this is a pretty big advantage to do it this way. Now the disadvantage, as you know, is solving implicit differential equations is way more expensive, right? Because you have to solve an implicit non-linear algebraic equation at every time step. And in this case it can be a pretty complicated equation.
Because if I have this many state variables, so 10 to the fourth, times 10 to the fourth, times 10 to the fourth, that's 10 to the 12th, times 200, so about 10 to the 14th numbers at each time step; that's my current state. And I'm trying to solve a non-linear algebraic equation with 10 to the 14th unknowns; you need a pretty good solver. I mean, I know backslash is good, and fsolve and stuff, but I really don't think it's going to work. So that's a really serious problem. So the next trick is to say, OK, well, this stiffness is only happening because of the chemistry. So I could solve all of the finite volumes separately. The idea is to split the problem up. I'm going to solve the chemistry using a stiff solver, one finite volume at a time, and then I'm going to do the transport some other way. So what I'm going to do is rewrite this equation: my unknown is dy/dt, which is equal to some kind of transport term plus some kind of reaction term. And the reaction term is local: in this equation, dy/dt at position x depends on y at position x. Whereas to compute the transport terms I need to do finite differences or something from adjoining finite volumes, right? I have to do fluxes from the adjoining volumes. So those terms are sensitive not just to my local value of y at this finite volume, but also to the neighboring values of y. But if I break it up, I have this local thing and this non-local thing. This one is local, and it's stiff. This one is non-local, but it's not so stiff, and often it's linear or nearly linear, because the transport terms are mostly linear. And so I can use specialized solvers that people have worked on for many years for each of these things. For the stiff thing I have things like ode15s, I have DAEPACK and DASPK and DASSL and SUNDIALS and all those programs that people have written and spent a lot of effort on to solve this problem.
And similarly there's a whole bunch of people who have spent their whole lives trying to solve the convection-diffusion problems, the non-reacting flow problem. And so I can use the specialized solvers that they have. So I want to split this up. And what I would really prefer is to solve dy/dt equals R and dy/dt equals T completely separately if I could, because I can use the specialized methods that are best for each. But this is not really the real equation, because the real equation has both of them added together. How can I handle that? So the simple operator splitting recipe is: solve dy/dt equals T for a while, starting from y at t0 equals y0, and at the end of this process I'll have a y at t final. Then I solve the reaction equation, dy/dt equals R, starting from y at t0 equal to that y final. I do some transport for a while, I get some values for the y, then I take those and solve the reaction equation starting from them, and I solve this one for a while and get a somewhat different y at t final. All right? And the simplest way of doing things is to just keep going back and forth. Boom, boom, boom, boom, boom, boom, boom, boom. And in the limit where I make this t final minus t0 very small, then maybe I might expect that this converges to the same solution as the coupled equations. All right? So that's the concept. Now, if you do it this way it doesn't work very well. You need to make this delta t incredibly small to get any accuracy. However, you can do a slightly more clever version, which is: I solve dy/dt equals T, starting from y at t0, so y0, and I solve this out to y at t0 plus delta t over 2. Call this y star. Then I solve dy/dt equals R, starting from y at t0 equal to y star, and this I integrate out to y at t0 plus the full delta t. I'll call that y double star.
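The half-step/full-step/half-step recipe can be sketched on a toy scalar problem. This is my own illustration: the T and R terms and their rates are invented, and simple inner Euler loops stand in for the specialized transport and stiff solvers the lecture describes.

```python
# Sketch of Strang splitting for dy/dt = T(y) + R(y) on a toy scalar problem.
# Illustrative rates only; substep() is a stand-in for a real sub-solver.

def transport(y):        # slow, non-stiff "transport-like" term
    return -0.5 * y

def reaction(y):         # faster "reaction-like" term
    return -10.0 * (y - 1.0)

def substep(f, y, dt, n=50):
    # Integrate dy/dt = f(y) over dt with small explicit Euler steps.
    h = dt / n
    for _ in range(n):
        y = y + h * f(y)
    return y

def strang_step(y, dt):
    y = substep(transport, y, dt / 2)   # half step of transport -> y*
    y = substep(reaction, y, dt)        # full step of reaction  -> y**
    y = substep(transport, y, dt / 2)   # half step of transport -> y final
    return y

y = 2.0
for _ in range(200):                    # march to t = 2 with dt = 0.01
    y = strang_step(y, 0.01)
# y should be near the true steady state -10.5*y + 10 = 0, i.e. y = 20/21
```

For this mild toy problem the split solution lands close to the true steady state; the flame examples later in the lecture show where that stops being reliable.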
And then I solve the transport one again, starting from y at t0 equal to y double star, and I integrate that out for another delta t over 2. And this one I call my y final. And that's the one I use. So I take half of a transport step, sort of like I'm starting to do the transport, then I suddenly say, oh, hold on, I've got to do my reaction. I do my reaction, then I do the other half of my transport after that. And if you do the analysis of this, it turns out this thing converges to the true solution of the coupled equations with second-order accuracy. So if I cut this delta t in half, the precision increases by a factor of four. If I cut it by a factor of 10, it increases by a factor of 100. So this is looking more promising as a way to do it. This is the most popular way to do it. This recipe for operator splitting is called Strang splitting, named after Gil Strang; he was a professor in the applied mathematics department here, former president of SIAM, the society for industrial and applied mathematics, and also the author of a fantastically great linear algebra textbook, which maybe some of you guys might have seen some videos of. So he invented this in 1968; I was five years old. He didn't know that his invention was of any practical use, being a mathematician, so he just published the paper. Now, in the meantime, this paper is a citation classic. It's cited, I don't know, 20,000 times. He had no idea; he never looks at citation lists, he's a mathematician. So I was working on this problem and-- let's see-- let's back up. So this is how we usually solve it, with splitting. And we like to solve it this way because I can use specialized solvers: I use a transport solver here, and I use a stiff solver here. This part I can parallelize, because I can do a different ODE solution at each finite volume.
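The second-order claim can be checked numerically on a toy problem of my own choosing, where both sub-flows have closed-form solutions so that all of the error comes from the splitting itself:

```python
# Halving delta t should cut the Strang splitting error by about 4x.
# Toy problem: dy/dt = -y + y^2, split into T(y) = -y and R(y) = y^2;
# each sub-step is solved exactly, so only the splitting error remains.
import numpy as np

def strang(y0, dt, nsteps):
    y = y0
    for _ in range(nsteps):
        y *= np.exp(-dt / 2)      # exact half step of dy/dt = -y
        y = y / (1.0 - dt * y)    # exact full step of dy/dt = y^2
        y *= np.exp(-dt / 2)      # exact half step of dy/dt = -y
    return y

y0 = 0.5
exact = 1.0 / (1.0 + np.e)        # closed-form solution of the full ODE at t = 1

err_coarse = abs(strang(y0, 0.10, 10) - exact)
err_fine = abs(strang(y0, 0.05, 20) - exact)
print(err_coarse / err_fine)      # should be close to 4 for a second-order method
```

The error ratio near 4 when delta t is halved is exactly the second-order behavior described above.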
So if I have a big parallel computer, say with 10,000 nodes, I can solve 10,000 finite volumes simultaneously in this step, so the CPU time is reduced a lot because it parallelizes easily. So this is really popular. This is the main way that people do things. And then in the ideal case you do it with a certain guess of delta t, then you try delta t divided by five, and if you get the same result then you publish it. OK? You've converged your solution. However, what I found doing this is-- I was trying to calculate the positions of flames. And what I found was that with the delta t's that I would try, sometimes I would get a qualitatively different solution depending on the delta t I chose. In particular, I was solving burner flames. So I have, like, a burner here, and it has a flame on top of it like this. And there are two importantly different cases. One is where the flame is floating above the burner, and one is where the flame goes all the way down and touches the metal. And you've probably seen both of those cases with a Bunsen burner or something like that. Sometimes, if you get the flow rates high enough, the whole thing lifts off and floats there. And sometimes, if you slow the flow rates down, the flame will anchor to the metal: some piece of the flame will actually touch the metal. Usually it's like-- a lot of times you have a case like this: you have a metal piece here, a metal piece here, here's your mixture coming in, out here is the air, and a lot of times the flame will actually touch this metal piece here. You know, the flame sort of looks like that. Have you ever seen a flame like this? Anyway, this is a common thing, an anchored flame. If you just have a slow-speed burner, that's what you get. If you go high speed on the burner you can make the whole thing lift up and sort of dance around if you look at the flame. All right?
You can probably see the same thing if you look at a piece of wood burning. If it's burning well, the flame will be lifted off the wood; you'll see a gap between the flame and the wood. If it's not burning that well, you can see bits of the flame actually touch the pieces of wood. OK? So that's a very important thing for a combustion guy. To me this is a big deal. So when I solved it with one choice of delta t, I got the anchored solution. When I solved it with another choice of delta t, I got the lifted solution, with a nice big gap. Now, they can't both be right. And so I was a little bit alarmed, because it's really different, right? Now, I didn't really care that much about the dynamics of my problem. I just wanted to make sure that the steady state flame I ended up with was the right one. So if you're interested in steady state problems, this is a really important thing: you want to make sure that your solution really converges to the true steady state. And what I demonstrated, primarily, is that the Strang splitting method is not reliable for converging to the steady state. Now, I believe if I made the delta t tiny enough, I would probably converge to the true steady state. But I don't know how tiny I have to go, and I can't really judge my convergence criteria very well, because at some delta t the solution suddenly jumps from one solution to another that's qualitatively, completely different. So the convergence is not smooth and continuous; it's converging, converging, converging, then jumps to some other solution. And I don't know where the jump happens, and I don't know, if I keep going further, will it jump to something else? So this made me very loath to publish the paper. Right? So I was agitated about that. So I started working on trying to fix this splitting. The problem is that Strang is really smart, so his splitting method is really good. It's stable, it's pretty accurate; it's really hard to beat, actually.
And so the papers I published weren't that good, because they really didn't improve it that much. But then I had a really smart post-doc named Ray Speth, who now works in the AeroAstro department. And he came up with a different way to look at this. He said, you know, if I have a problem split like this, I can always add some constant to this operator and subtract the same constant from that one, and I'm really solving the same problem. And so now the question is, can I cleverly choose this constant so that, for example, my steady state solution is going to be good? How do I figure out what constant to choose? The methods where you add these constants are called balancing methods; they're trying to balance the operator splitting scheme. And Ray's method is a pretty simple balancing method, so this is called simple balancing. Let's see if I have a slide that shows it. There we go. Simple balancing: you choose c from the average of the right-hand-side terms at the previous time step. Now, the idea here is that as you're getting close to a steady state, the previous time step is not going to be that much different from the current time step. And so this number is going to stay more or less steady as you get close. And so what you're really doing is adding, sort of, the average of these two terms to each of them from the previous step, and that has the consequence of really changing the solution. So let me just show you. The left-hand side is what Strang splitting does. The blue steps are the transport, and the green steps are the reaction. And what's happening is I do half a transport step, then I do a reaction. The reaction sort of jumps me off the trajectory, then brings me back to the trajectory, then overshoots, because I do a whole delta t step of reaction. And then I have to do a half step of transport back. And so I get from one black dot, the first point, to the other black dot, which is what I call y final.
And that's showing what happens, and it just repeats like that over and over again. And it makes sense: if you're close to a steady state, then if I just take one of these terms by itself, say the transport, without the balancing, it's going to head away from the steady state, because at steady state the transport and the reaction are balancing against each other. Typically you have something being transported in that is being consumed by the reaction, or something being created by reaction that is being transported away. So they have opposite signs. If I only put one in, which has a certain sign, then I'm going to move away from the steady state, and then I need the other one to bring it back. And the reason why Strang splitting has trouble is that it's doing these huge excursions away from the steady state at each time step, with the hope that it ends up back at the steady state. Right? That's the way it's supposed to work out. And to second order it does; it's pretty clever about how it works. But it does these crazy excursions. Now, if you have a non-linear model, the further your excursions go from the real trajectory, the bigger the error you get. If you use the balanced method, you can see what happens is that when you get to steady state, all the steps are just moving you along the steady state trajectory. They're basically maintaining the steady state, and that's because of what I added: the balanced transport sub-problem is transport plus the constant, which works out to half of the transport plus half of the reaction. So I'm getting rid of half of my transport and adding in half of the reaction. So I changed the equation: before I had dy/dt equal to T, but now I have dy/dt equal to one half of T plus one half of R.
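The balancing idea can be sketched on a toy problem. This is my own illustration, not Ray Speth's code: the rates are made up, and I take c as half the difference of the two terms at the previous step, which makes each modified sub-problem vanish at steady state (equivalently, each modified right-hand side becomes the average of T and R there).

```python
# Sketch of simple balanced splitting: solve dy/dt = T(y) + c and
# dy/dt = R(y) - c, with c recomputed from the previous step's state.
# Toy scalar problem with invented rates; substep() stands in for a
# real sub-solver.

def T(y):   # transport-like term, with an inflow
    return -0.5 * y + 1.0

def R(y):   # reaction-like term
    return -10.0 * (y - 1.0)

def substep(f, y, dt, n=50):
    # Integrate dy/dt = f(y) over dt with small explicit Euler steps.
    h = dt / n
    for _ in range(n):
        y = y + h * f(y)
    return y

def balanced_strang_step(y, dt):
    c = 0.5 * (R(y) - T(y))                   # balancing constant from y_n
    y = substep(lambda v: T(v) + c, y, dt / 2)
    y = substep(lambda v: R(v) - c, y, dt)
    y = substep(lambda v: T(v) + c, y, dt / 2)
    return y

y = 2.0
for _ in range(400):                          # march to t = 4 with dt = 0.01
    y = balanced_strang_step(y, 0.01)
# At the true steady state T + R = 0, both modified sub-problems are zero,
# so the scheme's fixed point is the exact steady state: y = 22/21.
```

Because both modified sub-problems vanish exactly at the true steady state, the iteration sits still once it gets there, instead of doing the large cancelling excursions that plain Strang splitting makes.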
The point is that these terms, added together, cancel out at steady state, so the right-hand side is basically going to be zero, which it should be at a steady state. So I'm sorry, that previous slide had a sign wrong; c should really be half the difference of the two terms, so that each modified sub-problem goes to zero at steady state. This is what happens in the solution: if you do Strang splitting, on the left, you get more accuracy as you cut delta t, but it doesn't get any better near the steady state. It doesn't matter whether you're near steady state or away from it; Strang splitting treats everything equally. In the balanced splittings, as you get to steady state, all of a sudden the errors drop exponentially. You see this? It's a logarithmic plot of the error. So you get tremendously good convergence to the real steady state once you get anywhere close to the steady state. Now, the cost of this is that because the formula for c depends on yn, it makes the whole thing more numerically unstable, because it's like an explicit method: we're using what happened at a previous time to correct our equation for future times. And so you're more likely to get numerical instabilities. So then Ray invented an implicit version of that, and also a higher-order method, and that's called rebalanced splitting. The equation is too complicated for me to write down, since I can't even write down the minus signs correctly in this one, but you're welcome to read the paper; it's in a SIAM journal, and he has the equations there. And when you put the second-order one in, the implicit one, the method is just as stable as Strang splitting, but it also has this great property that as you get close to steady state it converges exponentially to the solution, which is sort of like how Newton's method is so good: you get a lot more decimal places really fast once you get close. All right, that's enough. Enjoy Thanksgiving.
I posted a bunch of readings and things about Metropolis Monte Carlo and the stochastic stuff that we're talking about next week, and also a little bit more about models versus data. So do the reading when you get a chance. Say hi to your families, say hi to your friends. Have a nice time.